A lot of text there, but more deflecting than substantive. The central claim seems to be this: «In fact, according to the Orbit Media study shared above, the accuracy of GA4 as compared to Plausible –– due to the cookie consent banner’s presence –– was exactly at 55.6%. More than half of the data was missing, making Google Analytics stats not reliable!»
I think that tries to say that if Plausible generates graphs of 1000 pageviews, GA's graphs are based on 556 of the 1000, and that this makes GA's graphs half as accurate.
Say again? In my (admittedly old) experience, basing graphs on a sample of half the users doesn't reduce accuracy much. The per-day graphs and reports I generated looked very much like their per-week equivalents, despite being based on 15% of the data.
There's more filler, for example about how GA's data is delayed. I never really noticed any difference between yesterday's data and data from a few days earlier. Who cares about such details, really?
Some sentences I did not find in the article: "This lack of accuracy is a problem in practice, because …" or "Plausible detects problems with your funnels that GA4 doesn't, because …" or anything like that. They just brag about accuracy and promptness, without any argument that either makes a practical difference.
Sigh.
> Basing graphs on a sample of half the users doesn't reduce accuracy very much
On a random sample, it shouldn't reduce accuracy much. On a sample whose selection is biased in almost every category, you end up with entirely different data.
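To make the difference concrete, here's a quick simulation. All the numbers are made up (the referrer categories, traffic mix, and per-category consent rates are hypothetical): a random 50% sample preserves per-category shares, while a consent-biased sample skews them against low-consent categories.

```python
import random

random.seed(0)

# Hypothetical traffic: each pageview has a referrer category.
# Made-up consent rates: tech-savvy visitors decline tracking more often.
population = ["search"] * 6000 + ["social"] * 3000 + ["tech_blog"] * 1000
consent_rate = {"search": 0.7, "social": 0.6, "tech_blog": 0.2}

def shares(views):
    """Fraction of pageviews per referrer category."""
    total = len(views)
    return {c: round(views.count(c) / total, 2) for c in consent_rate}

# Uniform random 50% sample: per-category shares stay close to the truth.
random_sample = [v for v in population if random.random() < 0.5]

# Consent-biased sample: each category is kept at its own consent rate,
# so low-consent categories end up underrepresented.
biased_sample = [v for v in population if random.random() < consent_rate[v]]

print(shares(population))     # true shares
print(shares(random_sample))  # roughly the same shares
print(shares(biased_sample))  # tech_blog share shrinks noticeably
```

The point isn't the exact numbers; it's that a uniform sample only adds noise, while a selection correlated with the thing you're measuring shifts the shares themselves.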
Yes, indeed. Now, does GA's sample actually have significant bias? If it does, then demonstrating the practical effects should be easy for the experts at a competing company, don't you think?
> Now, does GA's sample actually have significant bias?
Yes, it does, both because of ad blockers (https://backlinko.com/ad-blockers-users) and because of the consent banner (https://www.advance-metrics.com/en/blog/cookie-behaviour-stu...).
> Plausible will reduce your page weight
I believe it.
> and will prevent your site from loading slow.
I don't! :)
The claim is that a site won't get slow from Plausible's side; other factors can definitely affect overall performance.
> that's the claim
That should be the claim.