I’ve been having an internal debate with myself about whether or not to leave a comment, because I’m struggling to find anything positive to say. I love weekend projects and don't ever want to discourage people from doing them.
But the other side of that debate in my head comes from the way I feel about this idea relative to blogging. I read blogs because I want to hear directly from another human. I also care about the subject matter they’re writing about, but part of the reason I care is because it’s a slice of someone’s perspective. Their life experiences. How a particular subject relates to them personally. How they solved a particular problem. Even when a blog post is purely technical with no personal narrative, it usually provides a snapshot of some aspect of some other human’s life, and the way they structure the words they wrote is a reflection of how their brain works.
I also enjoy seeing the progression of thought over time from authors. They’ll write about something and later realize some nuance was wrong, and they’ll write about their updated understanding.
I worry that the more disconnected authors become from their writing, the less aware they’ll be of the potential impact of their words, and those “aha! that thing I wrote 6 months ago was crap for xyz reason” moments are less likely to happen.
To me, AI-assisted blogging just fundamentally devalues the content. As a reader, I can’t know if what I’m reading is real to the author, or a flourish/detail fed to them by an LLM that they liked. Lately I’ve encountered an increasing number of posts that have those telltale (or obvious) signs, and I immediately stop reading. I’m no longer interested.
If I wanted LLM output about a given subject, I’d go to ChatGPT myself.
I think there’s a place for LLMs and some real value to be gained. But when it comes to blogging (and many types of writing), I think it’s a net negative to society.
I’d like to see a standard emerge where blogs can label themselves “No AI”.
I would say it's cool in the sense that building anything is cool, but I find myself mostly in agreement with your take, although with a caveat.
I can't find the quote now, but someone (I think simonw?) said that they feel a bit of an obligation to spend at least as much time working on writing something as it would take to read it, and I agree with that... if you want me to spend time reading your post, I'd like to know you actually made an effort on it.
For me, writing is thinking, and helps me refine my thinking, so I don't use AI to assist the writing process. I agree with the comments that AI writing tends to have a specific voice, and I don't care for that voice and don't want my writing to come across that way.
Where I do find it useful in writing, however, is as an editing pass in an advisory role. I don't ask it to rewrite anything for me, but I will ask it to double-check for excessive passive voice, review the tone, flag unanswered points, etc. I typically write my draft posts in Zed and use Zed's AI chat panel to throw a request at Claude. The big thing, though, is not blindly accepting every suggestion the AI makes - I read the suggestions, think about them, and sometimes adjust the post based on that feedback. It's a useful sanity-checking step, and while a real human editor would be preferable, I can't justify the cost of hiring an editor for my little blog that probably gets zero hits most days. :)
I’m on board with your caveat. I have no issues with using AI to analyze the writing to identify potential issues. I currently use spell/grammar checking and I have no qualms about that. A more robust proofreader does seem useful.
To your point, I’d want to constrain such a review to identifying structural or consistency issues vs. the AI getting involved with the subject matter itself.
The issue I ran into the few times I wrote something and asked ChatGPT to read it and identify any issues was that it was all too eager to tell me how to massively restructure things. The result read like typical AI slop. This was a prompting issue on my part because I gave it instructions that were too open ended. Careful/restrictive prompting is definitely necessary to make this viable.
I’ve started seeing blogs and other works of media with a “No AI” badge, and it makes me glad to see others feel the same way. I’m not anti-AI - I use Cursor extensively at work and in personal projects. What I don’t like is presenting (either explicitly or by omission) something almost entirely AI-generated as if it were created by a human. It’s fraud; there’s no other area where we’re alright with passing off something generated as handmade.
It makes me happy to hear this about the “No AI” badges you’ve encountered.
> It’s fraud; there’s no other area where we’re alright with passing off something generated as handmade.
This is something that bugs me as well. One of the signals that’s very useful when hiring is to read a person’s blog, GitHub etc. I’d want to know that what I’m reading was actually written by them, especially for roles that require writing in some form.
I definitely agree with part of what you're saying. The AI tools make it much easier to become disconnected from what you're writing.
That being said, did you read this post in particular? I explicitly say "ChatGPT and Claude tends to write in a generic style erases my personal tone of voice. I don't want to create "AI slop", I just want to write more efficiently."
My purpose with this tool is not to just dump what ChatGPT says into a document, but make it easier to research sources, find images, and proofread my writing.
I avoid criticizing what I haven't read. It's possible I misunderstood the project and post. Here are the things that jumped out at me and where my reply came from.
The title "Like cursor, but for blogging" primed me to expect a cursor-like capability, which I associate with iteratively producing generative output. While I find AI-assisted coding a dual-edged sword - it can be useful in the right hands, but risky in the wrong hands - I dislike the idea of text authored in the same way for the reasons I outlined above.
Later on, you mentioned tools you tried including Cursor, and called out that it's optimized for coding, not writing. These examples further reinforced my inclination to mentally model your tool as cursor-like.
Further down, you highlight the "Research Assistant" feature, which includes a "Generate topic suggestions" button. This is where alarm bells started going off in my head. Maybe this is more of a reflection of my own reading habits, but the content I gravitate to is usually about something the author already wanted to write about, and I interpreted this feature as a way to come up with ideas about what to write about.
In the conclusion, you mention adding autocompletion, and those alarm bells got louder. I've spent time experimenting with various auto-completion tools, and in the context of writing, they rub me the wrong way. This is where the concern about a growing disconnect between the author and the content comes from. When every word I type is followed by a computer prompt guiding me in one direction or another, the nature of what I write becomes entirely different. My brain is no longer the generator, but an evaluator. The words written are no longer my own... or are they? Would they have been? Hard to say. But as a form of human-to-human communication, it makes me uncomfortable.
> I explicitly say "ChatGPT and Claude tends to write in a generic style erases my personal tone of voice. I don't want to create "AI slop", I just want to write more efficiently."
The way I read this, especially in conjunction with the two bullet points following it, was that you found existing tools insufficient as-is because they replace your personal tone of voice, and this was the motivation to build a tool that doesn't replace your tone of voice. I didn't interpret this as: "I won't use the output of AI in my writing", just that the big existing tools aren't doing a good job of sounding like you.
> My purpose with this tool is not to just dump what ChatGPT says into a document, but make it easier to research sources, find images, and proofread my writing.
I think a tool that is highly constrained to contextual information gathering and helping analyze what you write is useful. I mentioned something similar in a reply above [0]. While I didn't mean to imply you were just dumping LLM output into your posts, I did interpret the post to mean that the LLM had a more active role in authoring the post.
In retrospect, I think it's possible I took all of this the wrong way. But even after reading the post a 2nd time, I can still read it both ways.
This isn't necessarily your fault - I think the issue is that the current tech landscape is filled with tools and people eager to use those tools to pump out content. When I read "write more efficiently", my mind goes to the tools that promise the same thing and accomplish it by generating slop. I realize you explicitly stated you don't want to contribute to slop, but that stated goal and what a tool does are potentially different things.
One suggestion - a very brief demo that shows the UI in operation would go far to clear up these misconceptions. The generative part of my own brain took the static examples and imagined outcomes that may be essentially hallucinations.
My apologies if I've gotten all of this wrong, and sorry for the book-length reply. I'm hoping that brain-dumping what led to my original train of thought will help explain why I wrote what I did.
- [0] https://news.ycombinator.com/item?id=43645302
Thanks for the thoughtful reply.
>In retrospect, I think it's possible I took all of this the wrong way. But even after reading the post a 2nd time, I can still read it both ways.
I think what you're saying resonates with me, and tbh yes, this type of tool could definitely go in the direction of having the AI write slop. That wasn't my intention with it, because at that point you could just have ChatGPT write the whole thing from the start. But I think the very nature of the tool lends itself to being able to do that.
>This isn't necessarily your fault - I think the issue is that the current tech landscape is filled with tools and people eager to use those tools to pump out content. When I read "write more efficiently", my mind goes to the tools that promise the same thing and accomplish it by generating slop. I realize you explicitly stated you don't want to contribute to slop, but that stated goal and what a tool does are potentially different things.
I think it's an interesting problem. Autocomplete could be great in theory if it actually predicted what I was going to say; if it's putting words in my mouth, I agree, it's not great in this context. The research topics feature is an attempt at the former - the goal is to generate suggested search queries for looking up references, based on the existing content of the blog post. These are queries I would have to run manually otherwise.
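To make that concrete, here's a minimal sketch of that kind of query generation, assuming the OpenAI Python SDK; the model, prompt wording, and function name are just for illustration, not the tool's actual implementation:

    # Sketch: turn a blog draft into suggested search queries for references.
    # Illustrative only - assumes the OpenAI Python SDK and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    def suggest_search_queries(draft: str, n: int = 5) -> list[str]:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": (
                    "You suggest web search queries for finding references for a "
                    "blog draft. Return one query per line. Do not rewrite or add "
                    "to the draft itself."
                )},
                {"role": "user", "content": f"Suggest {n} search queries for this draft:\n\n{draft}"},
            ],
        )
        lines = response.choices[0].message.content.splitlines()
        return [line.strip("- ").strip() for line in lines if line.strip()]

The point being that the model never touches the draft itself; it only proposes queries I'd then run (or skip) myself.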
>One suggestion - a very brief demo that shows the UI in operation would go far to clear up these misconceptions. The generative part of my own brain took the static examples and imagined outcomes that may be essentially hallucinations.
Great idea, I'll do that next time.
Anyways, thanks for the feedback! I just started blogging recently, so I appreciate it.
Thanks for taking the time to talk through this and for putting more thought into this than many of the projects in this category.
This is the 2025 edition of "I have to build a custom static site generator" instead of writing blog posts.
For a laugh, Google the phrase "how I built this blog on Gatsby" or "why I rebuilt this blog on next.js" and then go to page 10 of the Google results and realize humans are all the same.
This indicates to me that you treat blogging as an obligation rather than a way to share your ideas. I guess if you read enough AI blog spam, blogging itself just feels like a status game of what you've written about, divorced from quality of content? If you're padding out your own ideas with AI filler, why not just make your style more shortform and save your reader's time?
I just use Obsidian's copilot sidebar plugin with a prompt that is essentially:
"Review {activeNote} and point out misspellings or structural errors. Please ensure to highlight any grammatical issues and carefully examine the text for clarity and coherence. Additionally, check for punctuation errors, repeated use of words and ensure proper sentence structure. Do not mention double dashes (--) since they are post-processed into em dashes. Check any inline HTML for accessibility attributes."
Zero generation, zero automated edit, pure review and manual tweaking. If you want to keep your own voice, that's the only thing you need.
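If you want the same advisory-only pass outside Obsidian, here's a rough sketch of a standalone script, assuming the Anthropic Python SDK; the model alias and script shape are placeholders, not part of the plugin:

    # Rough sketch of the same review-only pass as a standalone script.
    # Assumes the Anthropic Python SDK and ANTHROPIC_API_KEY in the environment;
    # the model alias is a placeholder for whatever you have access to.
    import sys
    import anthropic

    REVIEW_PROMPT = (
        "Review the following note and point out misspellings or structural errors. "
        "Highlight any grammatical issues and examine the text for clarity and coherence. "
        "Check for punctuation errors and repeated words, and ensure proper sentence structure. "
        "Do not mention double dashes (--) since they are post-processed into em dashes. "
        "Check any inline HTML for accessibility attributes. "
        "Do not rewrite the text; only list issues."
    )

    def review(path: str) -> str:
        text = open(path, encoding="utf-8").read()
        client = anthropic.Anthropic()
        message = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=1024,
            messages=[{"role": "user", "content": f"{REVIEW_PROMPT}\n\n---\n\n{text}"}],
        )
        return message.content[0].text  # review notes only; the file is never modified

    if __name__ == "__main__":
        print(review(sys.argv[1]))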
Next step: build an AI-powered blog reader.
That way we can just let our machines talk to each other while we rot away, awesome :D
As long as it generates ad impressions and clicks, sounds like everyone wins?
AI and blogging are like a wolf and a sheep in the same yard: one will eventually kill the other.
I was expecting the blog post to be different based on the headline. It's really about an AI-powered tool that finds the links to things you want to refer to for you so you don't have to waste time looking up all your own references. It's not actually about writing the post.
I don't see how this tool takes the human out of the writing. It just saves some time looking up links. I think people are judging the post by the headline, but might actually find use for the tool itself.
Yeah, reading through the top comments here I can't help but come to the conclusion that a lot of the commenters didn't read the actual post!
I find it ironic that the author says
> [AI] tends to write in a generic style erases my personal tone of voice. I don't want to create "AI slop"
And yet the article has telltale signs of AI content. Dead giveaway? The “conclusion” section - no human writes like that unless they’re doing a high school essay, and most AI slop out there has that predictable structure, with an often unnecessary conclusion section. AI just can’t tell whether one is actually needed.
I even went through some other articles on this same blog to see whether it was just this article or the author consistently adds conclusions to her other articles. The others don’t have a conclusion; this AI-assisted one is the first with that structure.
I don't like that we're entering an era where humans won't be writing anymore. I might have to stop reading blog posts. Maybe I'll just start reading old books too.
I might have false negatives, of course, but I'm doing my best to manually curate and collect human-written blogs in my blog directory and search engine https://minifeed.net/
Maybe I should start curating one of these too. I still have hope for the human-driven internet!
I don't understand the appeal. I like writing, especially on my blog. It's fun! Why would I want a computer to do it for me? So it can express itself and I don't get to?
What's next, it rides my bicycle for me? Plays my instrument for me? Fucks my wife on my behalf?
These are all activities I would like to just do myself and I do not need a computer to do for me!
“I want AI to do my laundry and dishes so I can do writing and art, not for it to do my writing and art so I can do the laundry and dishes” - vox populi
I think the appeal is to be able to tell people you have a blog in order to get jobs, or prestige. I have a friend who generates entire articles using ChatGPT. He posts the articles on LinkedIn. It's all very pathetic and sad.
I wonder when people will start writing more like AI simply because they're exposed to so many AI-written articles.
This has certainly happened with me; the documents I write at my workplace often end up following the style of <heading> <paragraph> <4-6 bullet points>, which is also the pattern AI slop follows.
The problem with AI is that it always follows the same pattern. You wrote like that for a work document, but you wouldn’t use the same structure for a blog article.
I wonder if one could tell an AI to “don't use the typical sectional document structure, free yourself!” to generate different-looking content.
It’s an interesting observation that the structure of a piece can reduce its value regardless of the content. Not sure I agree, but interesting.
Funny that you don’t agree with your own implied conclusion. I hinted at no such thing regarding the value of the content.
I found that when I tried to use Cursor for blogging I was totally infuriated by it. As happy as I am to have Cursor auto-complete my code all day long, the sensation of having Cursor try to put words into my mouth (or, worse, put thoughts into my head) was very unpleasant.
So I stopped trying to use Cursor for writing and went back to vim.
But I found the unpleasantness of the sensation very surprising! I love using Cursor for writing code, but for some reason, even though I am primarily a programmer, it's much more important to me that my words are a "true expression of my spirit" than that my code is. I couldn't quite figure out why.
The reason I barely use Copilot is that it tries to write comments for me. I don’t get why they don’t have a button that says “disable autocomplete within comment blocks” (since the IDE knows which things are comments), but it’s such a yucky feeling when it tries to write something I wouldn’t say.
https://docs.github.com/en/copilot/customizing-copilot/addin...
Not sure that would prevent it from offering something as autocomplete in a comment block, but you could ask it to write fewer or no comments generally.
I find the comments Copilot proposes are better than the average comment quality for the codebase I routinely work on: maybe you are so great you don't need any help ever, but that's not true for the average software dev.
> I find the comments Copilot proposes are better than the average comment quality for the codebase I routinely work on
Maybe the average is just that bad? The completions I get for comments just document what the code is doing, which is not something I ever put into comments. Mine are always:
a) A high-level list (prefixed to the function/block/scope definition) of steps, input expectations, and output expectations for the forthcoming function/block/scope
Or
b) A note explaining why the code does what it does.
A comment repeating the code but in English is useless.
It's not that my comments are infallible, but if I write something wrong or silly it'll be caught during code review. Similarly, if there's a comment missing before some arcane nonsense nobody will remember in three years, I'd expect a PR reviewer to tell the dev to add one.
Copilot just likes to puke very useless comments whenever I type "//", especially in autocomplete mode (I don't really use the chat mode).
I've had the same experience with writing non-code text. I often use VSCode to write markdown documents on various things, and having Copilot chime in to complete my meal plan for the week with 5 different kinds of chicken wraps is a weird feeling lol
I think code is purely functional - there might be a style component to it, but mostly what I care about is "does it do the thing I want it to do".
Writing, on the other hand, is much more about expressing one's inner self.
I write music and poetry as a hobby, and not once have I found ChatGPT to be useful in this regard. Because the words aren't mine.
I've found that writing, then asking the chatbot for suggestions and selecting the ones I'm happy with is a reasonable approach. Treat it as an editor or a proof reader or a thesaurus on steroids. Not a pattern that really works with autocomplete, I guess.
I read the complete article. It does feel like it was written by a human, but only if you don't read all of it. So overall, I would have saved time if it had been written entirely by AI (I'd have chosen not to read past a glance).
I love the approach: keep the creative, human part intact - and in fact free up time for it, by letting AI take care of the menial tasks. And to all naysayers, yes, research is a menial task... if you think a True Writer[1] would always do it themselves on Google, keep in mind that just a few years ago a "true writer" would have to go scavenge in some abandoned archives to find reference material, so it's just a matter of perspective.
We built UnitText[2] with the same idea in mind, although we started from the "proofreading/copy-editing" part. Arguably, that's something most don't do at all... but asking someone to read your content, give you feedback, and iterate on it is an extremely valuable part of the process. Having AI do it means you can do it almost for free, and often. Again, freeing up more time for the actual writing.
Doesn't mean a human copy-editor shouldn't review your content before you hit publish, or a writer shouldn't read their references, but AI can help a lot with all those steps.
[1]: https://en.wikipedia.org/wiki/No_true_Scotsman
[2]: https://unittext.com
Why would I need to read someone's AI-generated blog if I can just make the AI generate it for me?
Looks interesting! I'm curious what kind of editor you used.
I've been thinking of creating a text editor with AI support, and of implementing something like a CRDT for the backend so that user edits aren't overwritten by the AI.
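To sketch what I mean (purely illustrative, not anything the existing tool does): even a toy last-writer-wins map over document blocks, with ties breaking in favor of the human, captures the "AI never overwrites my edits" idea; a real editor would want a proper sequence CRDT (RGA/Yjs-style) for concurrent edits within a block.

    # Toy sketch of the idea: a last-writer-wins map of document blocks where
    # human edits win ties against AI suggestions. Names are illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BlockVersion:
        text: str
        timestamp: int   # Lamport-style logical clock
        origin: str      # "human" or "ai"

    class BlockStore:
        def __init__(self):
            self.blocks: dict[str, BlockVersion] = {}

        def merge(self, block_id: str, incoming: BlockVersion) -> None:
            current = self.blocks.get(block_id)
            if current is None or self._wins(incoming, current):
                self.blocks[block_id] = incoming

        @staticmethod
        def _wins(a: BlockVersion, b: BlockVersion) -> bool:
            # Later edit wins; on a timestamp tie, the human edit beats the AI's.
            # (A real CRDT would also tie-break same-origin edits by site ID.)
            if a.timestamp != b.timestamp:
                return a.timestamp > b.timestamp
            return a.origin == "human" and b.origin == "ai"

    store = BlockStore()
    store.merge("intro", BlockVersion("My draft intro.", timestamp=1, origin="human"))
    store.merge("intro", BlockVersion("A punchier AI intro!", timestamp=1, origin="ai"))
    assert store.blocks["intro"].text == "My draft intro."  # the human edit survives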
Are there any chances of open sourcing your project? Thanks!
I'm definitely considering open-sourcing it, yeah! I'll post a follow-up if/when I do.
Fascinating how many people have hobbies they don't enjoy doing.
Do it for the destination, not the journey
That sounds like something that should not apply to your hobby.
/s, I guess
That's the 4am grindset right there.
His about page gives American Psycho in the 21st Century vibes:
https://www.maximepeabody.com/about
He made something that currently searches references on a given set of topics. It's not as if this automates thinking or writing any more than spell and grammar check.
There's quite a bit of vitriol in the comments that seems overblown. There's a line where we're typing a few words in to get some AI slop out, but I think this is a long way away from that.
I wonder if there's an overlap between people who'd desire to write AI blog posts and people who only post self-promotion links here.
I see the points others are making, but I would find this a useful tool, I think. IMHO the missing factor gets bypassed on many posts like this (not just on this platform) -- the proof is in the product.
As with any AI tool, the results it creates can be positive or negative. The churn of poorly written articles and reduplicated imagery has most of us on edge, but that should not negate the purpose and quality of other output. Some commenters appear to see this tool as just another means to rapidly generate slop content, but I don't see why it has to be, nor do I think that was the intent of the author / creator.
If what guides the use of the tool is the desire to create a certain piece of content, to express something, and the tool aids in telling the human story, then it's a winning answer, I think.