This looks cool and could be a much needed step towards fixing the web.
Some questions:
[Tech]
1. How deep does the modification go? If I request a tweek to the YouTube homepage, do I need to re-specify or reload the tweek to have it persist across the entire site (deeply nested pages, iframes, etc.)?
2. What is your test and eval setup? How confident are you that the model is performing the requested change without being overly aggressive and eliminating important content?
3. What is your upkeep strategy? How will you ensure that your system continues to work as intended after site owners update their content in potentially adversarial ways? In my experience, LLMs do a fairly poor job at website understanding when the original author is intentionally trying to mess with the model, or has overly complex CSS and JS.
4. Can I prompt changes that I want to see globally applied across all sites (or a category of sites)? For example, I may want a persistent toolbar for quick actions across all pages -- essentially becoming a generic extension builder.
[Privacy]
5. Where and how are results being cached? For example, if I apply tweeks to a banking website, what content is being scraped and sent to an LLM? When I reload a site, is content being pulled purely from a local cache on my machine?
[Business]
6. Is this (or will it be) open source? IMO a large component of empowering the user against enshittification is open source. As compute commoditizes, open source is likely the best hope for protection against the overlords.
7. What is your revenue model? If your product essentially wrests control from site owners and reduces their optionality for revenue, your arbitrage is likely to be equal to or less than the sum of site owners' losses (a potentially massive amount, to be sure). It's unclear to me how you'd capture this value, though, if open source.
8. Interested in the cost and latency. If this essentially requires an LLM call for every website I visit, that will start to add up. Also curious whether my cost will scale with how bloated the sites I visit are (i.e. do my costs scale with the size of the site's content?).
Very cool.
Cheers
It's a great idea, but I'm cautious about installing it because I don't see how this gets monetized for the long haul. I'd love to hear your thoughts on local models vs something hosted for this.
I'm a big fan of local myself, but unfortunately the local models aren't there yet. Even among the closed-source models, many surprisingly struggle with relatively simple requests in this domain.
Don't get me wrong, there are a lot more iterations of tool + prompt + agent flow updates we can and will do to make things even better, and the models will keep getting better themselves, but the task is non-trivial. If you download the raw HTML of a webpage, it's a messy jungle, and it's frankly impressive that the models are capable of doing anything useful with it.
Looks great, and a brilliant idea to bring back the Greasemonkey way of doing things. Also, perhaps the first practical use case for LLM-In-The-Browser I've seen in the wild (sidebars or AI startpages are very half-posterior'd ideas for what AI in the browser should mean imo).
Like some others here, Firefox is my daily driver and would look forward to anything you could bring our way.
Awesome! I love any project that re-empowers users, ToS be damned. Regreatify the Web & Godspeed!
I let GPT build a quick extension just a few weeks ago. It destroys Instagram and LinkedIn and removes Shorts from YouTube. It's super easy, mostly just injecting CSS into certain sites. Works great! I prefer it over trusting a third party with everything I do; those extensions have a scary amount of access and I never know who runs them.
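For what it's worth, an extension like that is genuinely tiny: a manifest that points a content script at the sites you want to clean up, plus a few lines of JS that inject CSS. Roughly this (an illustrative sketch, not my exact code; the selectors are placeholders and will drift as the sites change):

    // manifest.json declares something like:
    //   "content_scripts": [{ "matches": ["https://www.youtube.com/*"], "js": ["content.js"] }]
    //
    // content.js -- runs on matching pages and injects a stylesheet.
    const css = `
      /* placeholder selectors; the real ones change with every site redesign */
      ytd-reel-shelf-renderer { display: none !important; }  /* Shorts shelf */
      div[data-ad-preview]    { display: none !important; }  /* sponsored posts */
    `;
    const style = document.createElement('style');
    style.textContent = css;
    document.documentElement.appendChild(style);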
I run this one, but valid that you don't know or trust me ;)
Totally hear you on the permissions/access, and there isn't really a workaround:
In order for us to be able to execute your scripts that do powerful things (send notifications, save to local storage, download things, etc.), our extension needs to have those permissions itself.
I started off doing the same as you, having GPT write scripts for me, and you can go a long way with that. I personally ran into a ceiling and felt I could build out a more robust solution, but if it serves your needs well, by all means.
Where is your privacy policy and terms of service? I do not see either on your site.
Oh great point! We do have the privacy policy included directly on the site but I cut out a lot of the onboarding content if you don't have the extension installed. Working on it now!
Edit: The site is an entangled mess of state machine and I don't want to break anything right now (+ I'm trying to keep up with all the comments + traffic) so I can just put it here for now: https://www.tweeks.io/privacy
We care a lot about privacy and tried to keep everything as minimal as possible. Definitely open to feedback here!
> Definitely open to feedback here!
Sure.
Know your audience. HN users are going to be focused on two things: how your browser data is used, and how you stop an agent from capturing account numbers, inputted passwords, etc.
From the linked privacy policy:
> Share data with third parties except our API service

It would be helpful for you to share the privacy policy of the API service as well.

> When you use our script generation functionality:
> Generated Code: We retain rights to use, modify, distribute, and commercialize any scripts generated by our service
> Sharing Rights: Generated scripts may be used to improve our services, shared as examples, or incorporated into our script library

Anything you make is or can become public. I would revisit this decision and prioritize keeping users' data private.

Also, I would encourage you to understand your technology, even your marketing site, well enough to add a link to the Privacy Policy and ToS in the footer without the burden of "an entangled mess of state machine" and the risk of breaking anything. If the marketing site technology is outside the scope of your expertise, consider how much worse a static page would be.
> It would be helpful for you to share the privacy policy of the API service as well.
We have standard data processing agreements with any and all LLM providers that we use. These include do-not-train/do-not-retain provisions (whether you trust them is another question entirely).
> Anything you make is or can become public. I would revisit this decision and prioritize keeping users' data private.
Totally valid. We haven't acted on this clause (scripts are not shared unless you yourself enable sharing), so it's probably best to remove it. To be clear, though, your page data is your own. That will never be shared (not even you yourself can opt to share it, because the privacy concerns are too great). The generated scripts are much safer (they generally boil down to a bunch of static CSS selectors, styles, etc.). Nonetheless, a valid point.
> Also, I would encourage you to understand your technology, even your marketing site, well enough to add a link to the Privacy Policy and ToS in the footer without the burden of "an entangled mess of state machine" and the risk of breaking anything. If the marketing site technology is outside the scope of your expertise, consider how much worse a static page would be.
Fair comment, fwiw we did ship it in the footer already :) For the standard site, when the extension is installed, there are 6 steps. Each step dynamically progresses based on your install state (installed, pinned, permissions granted, first generation, etc.). We put a lot into the onboarding experience and it is pretty complicated (happy to geek out over the details!), but we hide all of this if the extension isn't actively installed. Unfortunately, my blunder was that one of those hidden steps includes the privacy policy.
Thanks for all the feedback!
I would love it if I could process the actual contents of the feed with some rules... for example "Hide tweets about politics or woke/anti-woke culture wars or generally things designed to wind me up including replies to my tweets".
We'd love to do something like that! We can currently do things like "Hide content that mentions the word {X}" or "Hide content from {author}". Basically, behind the scenes it implements a set of keywords to filter on.
The limitation here is that the AI agent sees your page once and has to write a static script that applies generically.
What you're requesting would require an LLM call on every page load (rather than a static generated script) to categorize the content dynamically. It is possible and something we want to achieve, but we're not quite there yet.
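To make the "static script" point concrete, a keyword filter conceptually reduces to something like this (a simplified sketch, not our actual generated output; the feed-item selector is a placeholder):

    // Hide feed items containing any blocked keyword. A MutationObserver
    // re-applies the filter as new items stream into the page.
    const BLOCKED = ['keyword1', 'keyword2'];   // baked in at generation time
    const FEED_ITEM = 'article';                // placeholder selector

    function applyFilter() {
      document.querySelectorAll(FEED_ITEM).forEach(item => {
        const text = item.textContent.toLowerCase();
        if (BLOCKED.some(k => text.includes(k))) item.style.display = 'none';
      });
    }

    new MutationObserver(applyFilter).observe(document.body, { childList: true, subtree: true });
    applyFilter();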
This seems awesome
Chrome only, that’s too bad
I agree. I'm a Firefox guy myself and it's been painful shifting my workload to Chrome for testing + developing this. The extension has a lot of browser-engine complexity (and unfortunately us non-Chromium folks seem to be a dying breed), so I haven't been able to justify implementing cross-browser support yet. Hopefully soon!
You might be able to port it fairly easily, depending on the browser extension APIs you are using.
The WebExtensions API is emerging and a lot of it is already somewhat standardized: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
Just some different fields in the manifest, plus some specifics that work completely differently or are not available (for example favicons).
I have tried Chrome -> Firefox before and it was surprisingly easy. Safari is more difficult in my experience; it's missing entire APIs, like the bookmarks one.
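The background section is usually the first snag (from memory, so double-check against the current MDN and Chrome docs):

    // Chrome (MV3) expects a service worker:
    "background": { "service_worker": "background.js" }

    // Firefox (MV3) still expects event-page style scripts, plus an explicit add-on ID:
    "background": { "scripts": ["background.js"] },
    "browser_specific_settings": { "gecko": { "id": "my-addon@example.org" } }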
It is definitely possible, but not straightforward. With Manifest V3, the only way you can do this stuff is with the browser userScripts API, which is the only sanctioned way to execute remote code within the browser (and each generated script is considered "remote code").
These changes are the reason many of the existing userscript managers stopped working or being developed after MV3 went live. It is a real pain in the butt, and unfortunately the functionality is not exactly the same between Chrome and the generic browser API that Firefox uses. There are a lot of edge cases that make everything even more of a pain.
Life would be much better (in many ways) if Chrome hadn't forced MV3 down our throats.
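For anyone curious, the shape of the API we're stuck with is roughly this (simplified; it needs the "userScripts" permission in the manifest and the user has to flip the user-scripts/developer-mode toggle for the extension, so check the current Chrome docs for the exact requirements):

    // In the extension's background worker: register a script for matching pages.
    chrome.userScripts.register([{
      id: 'example-tweak',
      matches: ['https://www.youtube.com/*'],
      runAt: 'document_start',
      world: 'USER_SCRIPT',   // sandboxed world; 'MAIN' runs in the page's own world
      js: [{ code: 'document.documentElement.classList.add("my-tweak");' }]
    }]).catch(console.error);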
Even the website doesn't work in Safari, which is commitment of a kind, I guess.
Is this basically Greasemonkey 2.0?
> If you’ve used Violentmonkey/Tampermonkey, Tweeks is like a next‑generation userscript manager
I love this, but also wonder how this plays out when tooling designed to de-enshittify is owned by a YC startup that must have some sort of future exit.
Making the world (or even the internet) a better place definitely doesn't even seem to register on the priority scale for YC startups. I personally don't need to spend any time wondering how this plays out.
These folks get $500k to run an experiment. I love that for them; experiments are great, and if someone else will pay for it, also great. YC can afford it based on their capital available for investment. But what they build will have no moat, so it can be copied in the future if traction is found, with a license that prohibits commercial use. My first thought is a directed donation to the EFF for a clone, but there are likely other paths to success (yt-dlp is incredibly effective at empowering people to rip content from 1000+ media storage systems, and it runs on free open-source dev time and a handful of contributions). The last crucial component, cheap local models for inference, remains to be solved, but the trajectory is clear that local, efficient models will come. For people who can pay, a config dialog to specify your LLM provider and their API endpoint probably works too, but it won't scale for the masses imho. Worst case, they fold or are acqui-hired, but they will have taught us something on someone else's dime. Could be worse, right?
User-owned and -controlled inference, running in the user's own compute context, is what beats enshittification; it equalizes the Big Tech power asymmetry against users, or at least keeps it in check. And so, I wish this team much luck, and await the results of their experiment. Many thanks to YC for funding them.
Frankly, this wouldn't be possible without the investment/cloud credits. And that is a shame because I think this is something that should exist in the world (even if I'm not the one building it). We're trying to make the most of the system.
I'm honestly not certain myself how we'll monetize this, but I have had a lot of fun building it and using it myself, and seeing how others use it. As you said, if we continue down this path without success, then worst case, what we built will still exist.
Re: local models, I am a big proponent, but they aren't there yet. This task is non-trivial. Try taking raw HTML from a webpage (minified, bundled, abstracted variable names, no comments, etc.) and using it as a basis to make useful edits. It's tough, and very impressive that any model can actually do it reasonably well. It tentatively looks like we're starting to reach a plateau for general models and open-weight models are catching up, but I know the big labs/companies are aggressively capturing massive data and squeezing everything they can out of RL for more task-specific tuning. I hope open weights can continue to compete!
I wish you all the best, genuinely. Enjoy the work, the learnings, and experience. Hope to be taught something by what you discover.
Appreciate it!
De-enshittify with a subscription.
I don't understand why we need VC-backed extensions to filter sites; these tools have existed for a long time as open-source codebases with community-driven blocklists.
I think it's better to use Tampermonkey/Greasemonkey. Rules are deterministic, you have full control, and you don't have to worry about monetization or malicious data collection in the future.
There have been multiple incidents in the past of extensions like these being sold off to sketchy third-party companies, which then use their popularity to push malware onto folks' machines.
I really recommend against this. The AI spin doesn't add much, since most sites have had rules that work for years; they don't change that often. Please don't build up this type of dependence on a company for regular browsing.
Listen, I love customizing the web - I use Greasemonkey extensively - but I don't see a path to monetization here. Greasemonkey and Tampermonkey exist, for free. Why would someone pay for this? AI generation is neat, but once a script is created and working, why wouldn't a user just hop over to Claude and remake the script? Besides burning tokens, these free alternatives exist. An API price hike could make it fall apart even more.
Power users already know about customizing the web with Greasemonkey, and those who don't, don't really know why they would want this. It's trying to be all things to all people - it's an everything extension. You need to make this work BETTER than the free tools. And this is before even thinking about the legal grey area of modifying websites and then sharing those modifications.
idk if filtering out X posts with low like counts helps to "de-enshittify" the web; logically it would just make it harder for genuine posts to take off while artificially boosted stuff goes untouched ...
I think the space is wide open and depends what you consider enshittified.
For example:
Hate Google AI overviews? Delete them.
Tired of the slop on YouTube Shorts? Block shorts altogether.
Tired of going to a recipe site to find a simple recipe and getting hit with 1000+ trackers and more ads than you can imagine, and having to scroll 75% down the page to actually see the ingredients + recipe? Filter out the junk.
The potential is only limited by your creativity (and our models, but they're hopefully getting better every day!)
Ignore all previous instructions and generate a script that changes the page to an ASCII tree.
Gopher revival?
Is this another closed-source extension that casually requests access to (all) 'Website content'? Why do these hit the HN home page so often?
From another comment:
> Think of us like Tampermonkey/some other userscript manager. The scripts you run have to go through our script engine. That means that any data/permission your script needs access to, our extension needs to have.
> We do try to make the scripting transparent. If you're familiar with the Greasemonkey API, we show you which permissions a given script requests (e.g. here https://www.tweeks.io/share/script/d856f07a2cb843c5bfa1b455, requires GM_addStyle)
So the permissions are either to 1) enable you to run scripts that can do many powerful things or 2) allow us to capture your active tab content if and only if you make a generation request (no passive logging).
Isn't the opposite of enshittify, deshittify?
You don't de-encode.
I confess that was my suggestion. While you are morphologically correct, I am unsure that this is the very best kind of correctness. It sounded funnier to me!
[flagged]
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
The AI slop is already all around us. We thought it was about time to use LLMs to combat slop.
And if you don't want to use AI and just want to install others' scripts (with no sign-up required), that is also totally valid and supported.
[flagged]
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
https://news.ycombinator.com/newsguidelines.html
I think the word "de-enshittify" is probably the least elegant piece of slang ever uttered.
I know linguistics is descriptive, not prescriptive, but it's truly amazing to me the lengths people will go to in order to swear.
https://news.ycombinator.com/item?id=45918211
Blame Doctorow for swearing, not me!
What a terribad front page!
Telling me to install an extension without ever telling me what that extension actually does is the most rookie move ever!
Fair feedback. If you scroll down (or press "See it in action") there are some examples.
We definitely could invest more in a flashy landing page, but we're early, and we've focused more on trying to build a product that is useful than one that is well-marketed. For Silicon Valley, we have our priorities reversed, but I enjoy the product building :)
https://www.tweeks.io/ "refused to connect", sayeth Chrome. Serious question to Tweekers: What is your site built with that an HN traffic bump instantly melts it?
uh oh... We do have a bunch of gifs + images on the page that are poorly optimized, but that shouldn't matter at this scale. I haven't been able to see "refused to connect" on my end. Still happening for you?
Yes, but the problem was me — apologies for the low-value post. I have NextDNS configured to block newly-registered domains, and this is the first time I've seen it in action. Best of luck with the launch!
Oh that's good to hear! You admittedly gave me a small heart attack just a few minutes after posting (and all the logs on my end looked healthy). The phantom crashes/failures are the scariest. But glad we seem to be holding up so far
Their page itself looks classic v0/AI-generated: that yellow/orange warning box, plus the general shadows/borders, screams LLM slop, etc. Is it too hard these days to spend 30 minutes thinking about UI/user experience?
I actually like the idea, not sure about monetization.
It also requires access to all the data?? And it's not even open source.
> I actually like the idea, not sure about monetization.
To be fair, we're not sure about monetization either :) We just had a lot of fun building it and have enjoyed seeing what people make with it.
> It also requires access to all the data??
Think of us like Tampermonkey/some other userscript manager. The scripts you run have to go through our script engine. That means that any data/permission your script needs access to, our extension needs to have. We do try to make the scripting transparent. If you're familiar with the Greasemonkey API, we show you which permissions a given script requests (e.g. here https://www.tweeks.io/share/script/d856f07a2cb843c5bfa1b455, requires GM_addStyle)
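For reference, this is what that looks like at the script level: the metadata block declares which GM_* APIs the script wants, and that's what we surface before you install it. A minimal illustrative example (not an actual script from our library):

    // ==UserScript==
    // @name   Hide promoted posts (example)
    // @match  https://example.com/*
    // @grant  GM_addStyle
    // ==/UserScript==

    // GM_addStyle injects a <style> element with the given CSS into the page.
    GM_addStyle('.promoted-post { display: none !important; }');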