Why we collect telemetry
...our team needs visibility into how features are being used in practice. We use this data to prioritize our work and evaluate whether features are meeting real user needs.
I'm curious why corporate development teams always feel the need to spy on their users? Is it not sufficient to employ good engineering and design practices? Git has served us well for 20+ years without detailed analytics over who exactly is using which features and commands. Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?
I used to believe that it was not necessary until I started building my own startup. If you don't have analytics you are flying blind. You don't know what your users actually care about or how to optimize a successful user journey. The difference between what people tell you when asked directly and how they actually use your software is actually shocking.
You're only flying blind if you make decisions without looking and thinking. Analytics isn't the only way to figure out "what your users actually care about"; you can also try the old-school way, commonly referred to as "talking with people". After taking notes, you think about it, maybe discuss it with others. Don't take what people say at face value, but weigh it together with your knowledge and experience, and you'll make even better product decisions than the people who are only making "data-driven decisions" all the time.
We do both and they yield different learnings. They are complementary. We also have an issue tracking board with upvotes. I would say to your point that you can't improve what you don't measure.
> The difference between what people tell you when asked directly and how they actually use your software is actually shocking.
And the difference between what they do and what they want is equally shocking. If what they want isn’t in your app, they can’t do it and it won’t show up in your data.
Quantitative data doesn’t tell you what your users want or care about. It tells you only what they are doing. You can get similar data without spying on your users.
I don’t necessarily think all data gathering is equivalent to spying, but if it’s not entirely opt-in, I think it is effectively spying no matter what you’re collecting, varying only along a dimension of invasiveness.
It makes me wonder: which `gh` features don't generate some activity in the github API that could just as easily guide feature development without adding extra telemetry?
I'm pretty ok with the github cli tool team flying blind. The tool isn't exactly a necessary part of any workflow. You don't need telemetry to glean that
that's akin to saying "i do not need their product therefore i don't care"... so what's your point? someone may have made it part of their workflow!
> I'm curious why corporate development teams always feel the need to spy on their users? Is it not sufficient to employ good engineering and design practices?
No, because users have different needs and thoughts from the developers. And because sometimes it's hard to get good feedback from people. Maybe everyone loves the concept of feature X, but then never uses it in practice for some reason. Or a given feature has a vocal fan base that won't actually translate to sales/real usage.
> Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?
I think yes, because git famously has a terrible UI, and any amount of telemetry would quickly tell you people fumble around a lot at first.
I imagine that in an alternate world, a git with telemetry would have come out with a less confusing UI because somebody would have looked at the stats and for instance have added "git restore" right from the very start, because "git checkout -- foo.txt" is an absolutely unintuitive command.
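For reference, the two equivalent commands being compared here (git restore shipped in Git 2.23 as the clearer alternative):

  git checkout -- foo.txt   # older form: discard working-tree changes to foo.txt
  git restore foo.txt       # newer, more explicit equivalent, added in Git 2.23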
> because git famously has a terrible UI
Thankfully, github has zero control over git. If they did have control they would have sunk the whole operation in year one.
> because somebody would have looked at the stats and for instance have added "git restore" right from the very start, because "git checkout -- foo.txt" is an absolutely unintuitive command.
How is git restore any better? Restoring what from when? At least git checkout is clear in what it does.
> I think yes, because git famously has a terrible UI, and any amount of telemetry would quickly tell you people fumble around a lot at first.
1. git doesn’t have a UI, it’s a program run in a terminal environment. the terminal is the interface for the user.
2. git has a specific design that was intended to solve a specific problem in a specific way. mostly for linux kernel development. so, the UX might seem terrible to you — but remember that it wasn’t built for you, nor was it designed for people in their first ever coding boot camp. that was never git’s purpose.
3. the fact that every other tool was designed so poorly that everyone (eventually, mostly) jumped on git as a new standard is an expression of the importance of designing systems well.
"UI" is a category that contains GUI as well as other UIs like TUIs and CLIs. "UX" encompasses a lot of design work that can be distilled into the UI, or into app design, or into documentation, or somewhere else.
UI means "user interface". For a CLI tool the UI is the commands and modifiers it offers on the terminal.
i lump those into user experience (UX) stuff as it’s more leaning towards “flow of user action” etc.
we can split hairs on the definition of things if you want, but it was less important of a point than the system design part.
A more intuitive git UI would reduce engagement. Do you really want to cut a 30 minute git session down to five minutes by introducing things like 'git restore' or 'git undo'? /s
> I'm curious why corporate development teams always feel the need to spy on their users
Unfortunately this is due to a large part of "decision makers" being non-technical folks who can't understand how the tools are actually used, since they don't use such tools themselves. So some product manager "responsible" for development tooling needs this sort of stuff to be able to perform in their job, just as some clueless product manager in e-commerce absolutely has to overload your frontend with scripts tracking your behaviour, also to be able to perform in their job. Of course the question remains why those jobs exist in the first place, as engineers were perfectly capable of designing interaction with their users before the VCs imposed the unfortunate paradigm of a deeply non-technical person somehow leading the design and development of highly technical products... So here we are, sharing our data with them, because how else will Joe collect his PM paycheck, in between prompting the AI for his slides and various "very important" meetings...
Man, if I had a nickel for every time a PM asked me to violate user privacy for the purposes of making a slide that will be shown to their boss for 2.5 seconds, I'd probably make enough to actually retire someday.
> Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?
I'm not sure if you're implying it's obvious but it's not obvious to me that it would be unhelpful.
Just anecdotally, I get the feeling telemetry often does more harm than good, because it's too easy to misinterpret or lie with statistics. There needs to be proper statistical methodology and biases need to be considered, but this doesn't always happen. Maybe a contrived example, but someone wants to show high impact on their next performance review? Implement the new feature in such a way that everyone easily misclicks it, then show the extremely high engagement as demonstration that their work is a huge success. For Git, I'm not sure it would be widely adopted today if the development process was mainly telemetry-driven rather than Torvalds developing it based solely on his expertise and intuition.
Not to mention it's really hard to statistically tell the difference between people spending a lot of time with a feature because it's really useful or because it's really difficult to get to do what you want
Telemetry is a really poor substitute for actually observing a couple of your users. But it's cheap and feels scientific and inclusive/fair (after all you are looking at everyone)
That is just poor analytics IMO; if you have a good harness you can definitely tell if a feature is not well designed. You have to optimize for things like the number of clicks to perform an operation, not time spent in the app.
The impact of a few more network calls and decreased privacy is basically never felt by users beyond this abstract "they're spying on me" realization. The impact of this telemetry for a product development team is material.
Not saying that telemetry is more valuable than privacy, just that it's a straightforward decision for a company to make when real benefits are counterbalanced only by abstract privacy concerns. This is why it's so universally applied across apps and tools developed commercially.
For most CLIs, I definitely feel extra network calls because they translate to real latency for commands that _should_ be quick.
If I run "gh alias set foo bar", and that takes even a marginally perceptible amount of time, I'll feel like the tool I'm using is poorly built since a local alias obviously doesn't need network calls.
I do see that `gh` is spawning a child to do sending in the background (https://github.com/cli/cli/blob/3ad29588b8bf9f2390be652f46ee...), which also is something I'd be annoyed at since having background processes lingering in a shell's session is bad manners for a command that doesn't have a very good reason to do so.
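Easy enough to check for yourself; a quick sketch, assuming a POSIX shell (the alias is just the documented example, not anything special):

  time gh alias set co 'pr checkout'   # a purely local config write; should be near-instant
  ps -ef | grep '[g]h'                 # look for a telemetry child still lingering afterwards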
It isn't only corporate development teams — open source development teams want to spy on their users, too. For instance, Homebrew: "Anonymous analytics allow us to prioritise fixes and features based on how, where and when people use Homebrew." [1]
[1] https://docs.brew.sh/Analytics
> Is it not sufficient to employ good engineering and design practices?
It's not that it's insufficient; it's that new developers, product people and designers literally don't know how to make tasteful and useful decisions without first "asking users" by experimenting on them.
Used to be you built up an intuition for your user base, but considering everyone is changing jobs every year, I guess people don't have time for that anymore. So literally every decision is "data driven", and no user is either delighted or upset anymore; everyone is just "OK, that's fine".
> I'm curious why corporate development teams always feel the need to spy on their users?
I've repeatedly talked about this on HN; I call it Marketing Driven Development. It's when some Marketing manager goes to your IT manager and starts asking for things that no customer wants or needs, so they can track if their initiatives justify their job, aka are they bringing in more people to x feature?
Honestly, with something as sensitive as software developer tools, I think any sort of telemetry should ALWAYS be off by default.
While I agree (I personally always opt out when I'm aware of it, and hate it when a tool suddenly gets telemetry), I don't think Git is comparable, and the same goes for Linux.
Linux and Git are fully open source and have big companies contributing to them. If a company like Google, Microsoft etc. needs a feature, they can usually afford to hire someone to develop _and_ maintain it.
Something like gh is the opposite. It's maintained by a single organisation, and the team maintaining it has finite resources. I don't think it's much to ask to want to understand what features are being used, what errors might come up, etc.
Good news! gh is actually a client of a web API so they can just read their logs to know what's being used!
Arguably yes. git has a terrible developer experience and we've only gotten to this point where everyone embraces it through Stockholm syndrome. If someone had been looking at analytics from git, they'd have seen millions of confused people trying to find the right incantation in a forest of confusing poorly named flags.
Sincerely, a Mercurial user from way back.
https://xkcd.com/1597/
> I'm curious why corporate development teams always feel the need to spy on their users?
Because they're too shy, lazy, or socially awkward to actually ask their users questions.
They cover up this anxiety and laziness by saying that it costs too much, or it doesn't "scale." Both of these are false.
My company requires me to actually speak to the people who use the web sites I build; usually about every ten to twelve months. The company pays for my time, travel, and other expenses.
The company does this because it cares about the product. It has to, because it is beholden to the customers for its financial position, not to anonymous stock market trading bots a continent away.
Respectfully, I think your argument defeats itself. If you can only speak to your users once every 10-12 months, your process doesn't scale by definition. Good analytics (not useless vanity metrics) should let you spot a problem days after launch, not three quarters later when a user finally airs their grievances.
Git has notoriously had performance and scaling issues, and a horrible user interface. Both of these problems can be measured using telemetry, and improvements can be tracked once telemetry is in place.
How was it notorious if git has no telemetry? According to you without telemetry nothing can be known, and nothing can become notorious.
This is where (surprise surprise) I respect Valve. The hardware survey is opt-in and transparent. They get useful info out of it and it's just... not scummy.
There are all sorts of best practices for getting info without vacuuming up everyone’s data in opaque ways.
To be fair, you can be pretty sure they're heavily leveraging all their store data, in loads of ways. They probably sit on the biggest dataset of video game preferences for people in general, and I'm betting they make use of it heavily.
And you think microsoft isn't already doing that?
If you have details on what they’re collecting and how they’re using it/if they’re selling it to advertisers/etc, I’m happy to make a judgment.
I’m not saying they don’t engage in any of those practices, I am specifically talking about the hardware survey.
> If you have details on what they’re collecting
Well, you can start with everything a typical HTTP request and TCP connection comes with; surely they're already storing those things for "anti-fraud practices", and it wouldn't be a stretch to imagine this data warehouse is used for analytics and product decisions as well.
They are analyzing absolutely every click you make, I can guarantee it.
And if you provide evidence of this (and yes I think it is possible) then I will say it’s bad.
The hardware survey is not that.
If you have 3 of your developers spending 80% of their time in an area of the codebase that gets no usage and you don't see a path forward that realistically is likely to increase usage, it can be a better use of developer time to focus them elsewhere or even rethink the feature.
The problem I have with a lot of these analytics is that while there are harmless ways to use it, there is this understanding that they could be tying your unique identifier to behavioral patterns which could be used to reconstruct your identity with machine learning. It's even worse if they include timestamps.
Why not just expose exactly what telemetry is being sent when it's sent? Like add an option that makes telemetry verbose, but doesn't send it unless you enable it. That way you can evaluate it before you decide to turn it on. Whenever you do the Steam Hardware survey it'll show you what gets sent. This is the right way to do it.
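Something like this hypothetical UX (to be clear: no such preview mode exists in gh today; the setting value and output are invented purely for illustration):

  gh config set telemetry verbose   # hypothetical value, not a real gh setting
  gh pr list
  # would print what it *would* send, without sending it, e.g.:
  # telemetry (not sent): {"command":"pr_list","version":"2.91.0","platform":"linux"}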
Do people think that GitHub isn't already collecting and aggregating all the requests sent to their servers, which is after all the entire point of the gh CLI?
If you don't want your requests tracked, you're going to have to opt out of a lot more than this one setting.
The data is on their server, so obviously they are already doing it; they just want to increase tracking by adding client-side metrics, so they also know what transits to GitLab, Codeberg and such.
I did not get that impression from these docs or from a brief look through the gh CLI codebase. Can you point to evidence that makes you believe this is used to collect metrics about requests to other services?
Love it when a PR is brief: https://github.com/cli/cli/pull/13254
> Removes the env var that gates telemetry, so it will be on by default.
Not only is it on by default, it seems it also isn't possible to disable. It's forced on (other than for enterprise, it seems).
There is a "How to opt out" section in TFA.
Remember that thing Microsoft does?
Embrace, extend, extinguish.
The first two have been done.
I give it five years before the GH CLI is the only way to interact with GitHub repos.
Then the third will also be done, and the cycle is complete.
So happy I deployed gitea to my homelab last month. It's got an import feature from github, and honestly it's just faster with better uptime than github. Claude can use it just fine with the tea cli and git. It's pretty much a knockoff github, but I think it's better so far.
I’m running Forgejo, which has the same core code, and yeah, it’s amazing. Faster and better uptime indeed. It even works when my internet goes down, because it’s on a Pi 4 here in the cabinet next to my desk. Backups are done with borg and syncthing to an offsite location. It takes a bit of work setting it up, but after that maintenance time is near zero. I just manually SSH in once every two weeks to check SSD space and RAM usage, run apt update and upgrade, and do major version bumps.
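For anyone wanting to replicate it, a minimal sketch of that kind of borg job (host and paths are placeholders, not the actual setup):

  borg create --stats ssh://backup-host/./forgejo::'forgejo-{now}' /var/lib/forgejo
  borg prune --keep-daily 7 --keep-weekly 4 ssh://backup-host/./forgejo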
can someone explain why github has a CLI? why wouldn't you just use git?
At my current job, I sometimes set up a Nix shell with the GitHub CLI, since that lets Claude Code associate a feature branch with a pull request. The LLM can then retrieve the PR description, workflow results, review comments, etc.
Also, I believe GitHub Actions cache cannot be bulk deleted outside of the CLI. The first time I [hesitantly] used the gh CLI was to empty GitHub Actions cache. At the time it wasn't possible with the REST API or web interface.
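A minimal sketch of that kind of setup (gh is packaged in nixpkgs; this isn't the exact shell, just the idea):

  nix-shell -p gh                # drop into a shell with the GitHub CLI available
  gh pr view --json title,body   # let the agent pull the PR description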
You use gh to interact with the forge, git to interact with the repo.
For example,

  gh pr checks --watch

will run and poll the CI checks of a PR and exit 0 once they all pass.
My last job they used gh features heavily - pull requests, issues, and gha most of all. So having the cli made automating (or interacting with agents) github-specific tasks possible.
Creating PRs, reading PRs, creating/reading Issues, triggering actions, to name a few
PRs, and managing repos, and other things that aren't git features. You can use it to auth with GITHUB_TOKEN instead of ssh or http. Which is how my agents get access. I've switched to gitea, it's got all the same features.
gh is insanely powerful, especially if you let your coding agent use it. It’s one of my top tools. gh lets you use GitHub features such as issues, pull requests, reading CI pipelines, creating CI pipelines, etc. git is just for code version control.
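For example, a few of the subcommands behind those features (the workflow file name is just an example):

  gh issue list
  gh pr create --fill
  gh run watch                 # follow a CI run live
  gh workflow run deploy.yml   # trigger a pipeline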
I suggest anyone who cares, and certainly anybody in the EU, mails privacy@github.com and also opens a support ticket to let them know exactly what you think.
Wouldn't telemetry solve this problem automatically? I mean: they should get some signal back when people opt-out no? :)
Today I learned GitHub has a CLI. I guess that's like Pornhub having a CLI
Gh cli is one of the most powerful tools you can give a coding agent imo
Before GitHub had a CLI, I used cURL (via zsh aliases/functions) to open PRs and find which remote/branch a PR is associated with.
Today I use a Golang CLI made with ~200K LOC to do essentially the same thing. Yay, efficiency?
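For flavor, a minimal sketch of what such a cURL helper can look like, hitting the real REST endpoint (owner, repo and branch names are placeholders):

  curl -s -X POST \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -H "Accept: application/vnd.github+json" \
    https://api.github.com/repos/OWNER/REPO/pulls \
    -d '{"title":"My change","head":"feature-branch","base":"main"}'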
Seeing how annoying their website interfaces are, I'd actually be open to paying for API/CLI access to porn.
what's the last version before telemetry... will want to pin there.
According to their releases page: 2.90.0
Just use Radicle and never look back at centralised platforms.
pseudoanonymous = euphemism for not anonymous.
Regulators should wake up and fine them hard, so hard it becomes existential. Make an example for others not to follow.
I wonder how robust they are against people sending them fake data.
pseudoanonymous, meaning not anonymous? lol
Yes, that's exactly what pseudoanonymous means. It's fake-anonymous. It can be trivially de-anonymized.
No, "pseudoanonymous" doesn't mean anything, because it is not a word.
https://en.wiktionary.org/w/index.php?search=pseudoanonymous...
It is interesting how GitHub sort of prominently features this non-word in their article. Perhaps some South Asian or European person for whom English is a struggle.
There is no word that means "fake-anonymous". I would assume that the author of this article intended to write "pseudonymous" which is a real word with a real definition.
https://en.wiktionary.org/wiki/pseudonymous
But it would also be interesting if they very much intended the ambiguity of using a non-word that is more than it seems on the surface.
tl;dr for opt-out as per https://cli.github.com/telemetry#how-to-opt-out (any of these work individually):

  export GH_TELEMETRY=false
  export DO_NOT_TRACK=true
  gh config set telemetry disabled   # starting from version 2.91.0, which this announcement refers to
$ gh --version
gh version 2.90.0 (2026-04-16) https://github.com/cli/cli/releases/tag/v2.90.0
$ gh config set telemetry disabled
! warning: 'telemetry' is not a known configuration key
v2.91.0 is the one that's going to introduce it: https://github.com/cli/cli/releases/tag/v2.91.0
Also note that even though you get a warning about an unknown config key, the value is actually set so you're future-proof. Check `grep telemetry ~/.config/gh/config.yml`
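On 2.90.0 that looks like:

  $ grep telemetry ~/.config/gh/config.yml
  telemetry: disabled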
... don't forget to recheck this info every update, restore flags that have been "accidentally" reset and set any new flags that they added for "different" telemetry
> gh config set telemetry false
> ! warning: 'telemetry' is not a known configuration key
What's strange is if you check your `~/.config/gh/config.yml` it will put `telemetry: disabled` in there. But it will put anything in that `config.yml` lol.
> gh config set this-is-some-random-bullshit aww-shucks
> ! warning: 'this-is-some-random-bullshit' is not a known configuration key
But in my config.yml is
this-is-some-random-bullshit: aww-shucks
FWIW, looks to remain disabled by default for enterprise users.