This dovetails with my own experience more or less exactly: when I launched my company, it was easy to throw everything on Heroku, and to their credit and detriment it was also fairly easy to move pretty much everything _off_ of Heroku (RDS for a database, RedisLabs for, well, Redis, and so on.)
Back in 2021 (https://news.ycombinator.com/item?id=29648325, https://news.ycombinator.com/item?id=30177907) and 2022 (https://news.ycombinator.com/item?id=32608734) Heroku went from "well this is costing enough that it probably makes sense to divest at some point and save the $X00/mo" to "Heroku is now the biggest systemic risk to uptime that I have", and it felt _very_ high-priority to get off of them and onto someone else.
Two years later, though... the inclination has ebbed. Heroku hasn't shipped anything meaningful for my use case in the past 24 months, but also they have been fairly stable. I'm sure I will migrate off of them onto something else in the fullness of time, but it would take a pretty severe precipitating event.
I build an add-on for Heroku[0], have worked on and off for a company that's had all its core services on Heroku for over 8 years, and I've put a lot of my side projects etc. on Heroku.
My experiences differ depending on the context. I mostly use Render or an alternative for side projects now (just due to cost/forgettability). As a daily professional user of Heroku, it's clear Heroku isn't a priority for Salesforce. Heroku has struggled to maintain any form of product development and, if anything, has become more unreliable over the last year or two.
As an add-on developer, my communication with Heroku has been fantastic. You might expect that, since add-ons are a direct revenue stream and feature expander for them, but my experience with other platforms hasn't been the same (iOS has slow/poor communication and docs, Chrome's extension support is non-existent and often not backwards compatible, etc.). It's kind of re-ignited my love affair with Heroku, like it was pre-Salesforce.
Overall I can't see us moving from Heroku unless costs demand it - it's just too 'easy' to stay. Vendor lock-in is real and I'm okay admitting that.
0 - https://elements.heroku.com/addons/eppalock
These platforms enable horizontal scaling, but ironically, many apps only need to scale horizontally because the base instances offer such underwhelming specs.
It's frustrating to see engineering teams in 2024 spending countless hours optimizing their applications to run on what essentially amounts to 2008-era hardware configurations.
I worked on serverless computing for several years at a major cloud provider and think I can offer a major reason for this: many customers (and we ourselves; I spent the majority of my time on projects motivated by a desire to increase throughput per infra dollar) found it challenging to implement software capable of reliably handling high levels of concurrency.
Start from the premise that the majority of developers out there have no experience dealing with concurrency, then consider that changing something's approach to concurrency/parallelism can occupy several developers' time for months or possibly years. Then consider that for a business the absolute costs of using 2-20x more instances than "necessary" may not justify the investment to actually switch to that. Then it makes sense why people so often choose to use concurrency = 1 (in which case you basically want the smallest instance size capable of running your app performantly) or whatever concurrency setting they've been using for ages, even if there are theoretical cost savings.
Now you have both a concurrency issue and a distributed systems problem...
This is one reason AWS Fargate is so good. No, it isn’t developer friendly, but it gives you performance that actually matches the CPU allocation while still abstracting the VMs.
It doesn’t, actually, because you have zero guarantees on what generation of hardware you’ll end up on from deployment to deployment. Haswell this time, Ice Lake next time. What fun!
We migrated from Heroku to AWS ECS and Fargate.
It's not hard, and it provides a nice managed service without the full complexity of running K8s.
I hadn't needed to use a memory profiler once for 15+ years until using Heroku
And haven't used one again since moving off ...
It sort of seems to me that Heroku stopped development / experiments after getting a bunch of traction. They were spot on solving a specific problem, early, and as time went on, said problem became a non-issue?
For a long time, they operated more or less like an independent org after the Salesforce acquisition. Eventually they’ve been pulled into the ecosystem though, and now it has the on-brand feel: 10+ second page loads, weird breakage like it taking two tries to click the SSO links to open addons like New Relic, needing to re-auth too often. It’s a shame.
A lot of the key talent also moved on.
What used to be the "Oracle Effect" has now become the "Salesforce Effect".
Salesforce bought Heroku in 2011. Also every cloud provider has an in-house "good enough" PaaS at this point.
This is a great article but one thing it doesn’t discuss is the importance of who the underlying cloud provider is. Many companies are pretty locked into AWS for better and for worse, regardless of whether they use things like lambda that are known for lock-in. Just the fact that you can use crunchydata, redis.com, and heroku is a reflection of being on AWS under the hood. Moving to something like fly.io or railway means introducing internet egress between your services.
This is basically our exact experience at readwise.io too. Originally, everything was through Heroku.
We started by moving our heroku redis and postgres to redislabs and crunchy respectively, which were 10x+ better in terms of support, reliability, performance, etc. Then our random addons.
We recently moved our background job workers (which were a majority of our Heroku hosting cost) to render.com with ~0 downtime. Render has been great.
We now just have our web servers running on Heroku (which we'll probably move to Render next year too)...
End of an era. Grateful for Heroku and the next generation of startups spawned by its slow decline :)
Moved everything Ruby to Fly.io a while back, can recommend
A few annoyances (like the CLI auto-updating and rebuilding Go etc. when you want to deploy a fast fix) but overall very solid
Also Render have been useful for running scripts
The vertical DBaaS providers are great for early phases but, generalising, seem to have pricing models tuned to punish any success (such as storage overage fees even when compute is low) -- also sneaky tier configs where the lower tiers don't offer important features most people need during the prototype/dev phase, forcing dedicated instances even though you're not pushing any volume
i dunno. moving off heroku to another provider marking up aws or pretending to seems counterproductive.
a go binary in a zip builds and uploads to lambda in 1 second. handle routing and everything else in the binary, don't use aws features. you don't need em. (a rough sketch of this shape follows after this comment.)
lambda, s3, and dynamo all scale to zero with usage based billing.
toss in rds if you really miss sql or dislike scale to zero.
once this is too expensive, move to ovh metal, which sells significantly faster compute, e.g. epyc 4244p with ddr5 5200.
more importantly, they sell bandwidth for $0.50/TB instead of $0.10/GB, which is aws price before paas wrapper markup.
the ovh price is after the 1Gbps unmetered you get for free with any server.
most companies will never even need metal, and between lambda and metal is ec2 spot, which might be of use anyway for elasticity.
ovh metal bills monthly, ec2 spot bills by the second. recently i learned ec2 spot in local zones is 2-3x cheaper than standard spot. i only use r5.xlarge in los angeles these days.
ovh metal has an old fashioned but perfectly workable api. aws has a great api.
spend a few days figuring out a decent sdlc, and then freeze it permanently. it will grow insanely robust over time.
i haven’t published my ovh metal workflows. my aws workflows are here[1].
lower overhead means more interesting products coming to market. lower friction definitely doesn't hurt. vive la renaissance!
1. https://github.com/nathants/libaws
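For illustration, a minimal sketch of the single-binary routing described above, using the stock aws-lambda-go package (the routes and handler here are hypothetical, and the commenter's own libaws tooling may wire things differently):

    package main

    import (
        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // All routing lives inside the binary as plain conditionals; no web
    // framework and no extra AWS features beyond Lambda itself.
    func handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
        switch req.Path {
        case "/health":
            return events.APIGatewayProxyResponse{StatusCode: 200, Body: "ok"}, nil
        case "/hello":
            name := req.QueryStringParameters["name"]
            return events.APIGatewayProxyResponse{StatusCode: 200, Body: "hello " + name}, nil
        default:
            return events.APIGatewayProxyResponse{StatusCode: 404, Body: "not found"}, nil
        }
    }

    func main() {
        // Compile for linux, zip the binary, upload to Lambda.
        lambda.Start(handler)
    }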
API development on lambda has to be the worst devex I’ve ever had. Not sure if it was cuz we were also using Dynamo, a half-baked JS framework, or cuz we had wacky internal requirements pushed down (one ex: live-live multi-region deployments for our 0 users!)
Maybe you’ve figured it out, but the local dev flow seemed pretty hacky/nonexistent. It also got expensive with real traffic
single binary go workflow is great. net http. conditionals for routes. live updates in 1 second. no frameworks needed.
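A rough sketch of that shape (hypothetical routes; when deployed to Lambda the same switch moves into the event handler, as in the earlier example):

    package main

    import (
        "fmt"
        "net/http"
    )

    // Single binary, net/http, conditionals for routes, no framework.
    func route(w http.ResponseWriter, r *http.Request) {
        switch r.URL.Path {
        case "/":
            fmt.Fprintln(w, "home")
        case "/health":
            fmt.Fprintln(w, "ok")
        default:
            http.NotFound(w, r)
        }
    }

    func main() {
        http.ListenAndServe(":8080", http.HandlerFunc(route))
    }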
That's cool - I'll give it a shot with Go after I get over my PTSD.
good luck!
only thing worse than dysfunctional companies is dysfunctional technology.
someday we’ll get the incentives aligned properly, and thrive.
Exactly, I don't really see the point of migrating off Heroku unless you have scaled beyond it (in which case you are successful anyway) or are simply chasing distractions.
For us, Heroku allows us to focus on the product and simply ship features which brings revenue and keeps everyone happy; it may not be sexy right now but it sure as heck is mature and stable with lots of integrations.
Salesforce might eventually end up completely dismantling it but I'm hoping by that time other players can catch up.
i haven’t used heroku. i assume it and all other x% aws-markup paas feel about the same.
my point was that it’s not really needed.
if you’re migrating off, you already have an sdlc that you like and want to preserve from provider degradation over time.
simplify it a bit, and encode it directly into aws or ovh. then it’s permanent, and grows insanely robust over time.
sdlc doesn't need to be constantly evolving. evolve it for the next greenfield product.
I'm so happy we got started back before cloud services were common, so just grew with dedicated servers all along. A couple of times I've tried to price out what we'd pay for similar infrastructure from a cloud provider and the difference is insane. Plus it could never actually offer the same performance we have with servers in the same rack, containing fast CPUs and arrays of gen4 ssds in the same boxes, etc.
Of course it helps that we've grown very gradually over many years, so we don't need to scale rapidly; we can just over-provision by a few times to handle the spikes we do get, and work out tuning and upgrades each time we brush up against bottlenecks. So I'm sure it wouldn't work for everyone. But I bet there are still a lot of startups that would do well to just lease a dedicated box or two.
> Yes, every second Friday there was mild panic as all the errors and alarms went off when Heroku took the database offline and did whatever they needed to do, then restarted the app. This whole process took about 10 minutes
The heck? Like sure, people may call me "too perfect", but 20 minutes of outage a month for a Postgres database or a Redis instance is simply not acceptable. (Crossing out the less professional words there.)
We're not particularly ambitious at work at guaranteeing a 99.5% SLA to our customers, but 20 minutes of outage / month is already 99.5%, and availability only goes down from whatever your database alone gives you. We observe that much downtime on a Postgres cluster in a year.
20 minutes is 99.95% monthly uptime. That's what Google Cloud SQL promises you. 99.95% allows 24 h/d * 60 min/h * 30 d/mon * 0.0005 of downtime, which is a little over 20 min (quick arithmetic below).
At work you're committing to roughly 3.6 h / month.
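To make the arithmetic above concrete (a quick sketch in Go, assuming a 30-day month):

    package main

    import "fmt"

    func main() {
        const minutesPerMonth = 30 * 24 * 60 // 43,200 minutes in a 30-day month
        for _, uptime := range []float64{0.995, 0.9995} {
            fmt.Printf("%.2f%% uptime allows %.1f min of downtime per month\n",
                uptime*100, minutesPerMonth*(1-uptime))
        }
        // Prints: 99.50% allows 216.0 min (~3.6 h), 99.95% allows 21.6 min.
    }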
Ah, I mis-remembered / brain farted a 9 in there.
Yet, a promise of 99.95% (about 20 minutes of downtime) versus actually having 20 minutes of interruptions and downtime every month are still wildly different things.
Google Cloud Run with Firestore or BigQuery and Cloud Storage is often cheaper, faster and easier. I spend $4/month to run https://web3dsurvey.com because it is pay-per-usage. I have others that cost $2 and less than $1 per month. Paying only for the CPU actually used is an awesome model.
The equivalent AWS is probably similar in price.
Ops person here: by moving Redis/Postgres to Redis Inc./CrunchyData, does that mean your queries are running over the internet? What are the security/response-time implications of that? I can already see my InfoSec person going "YOU PUT WHAT ON THE INTERNET?"
From an infosec perspective, as long as the queries are encrypted (with proper TLS verification), that angle is covered (though there are other considerations around data sovereignty etc.).
In terms of response time, that's something you'd need to benchmark for your application - though, given most DBaaSes run in the same major cloud providers as your application, traffic will either stay in the same region or go cross-region via the provider's private backbone, so the latency delta should be manageable. Of course, if your app is particularly latency-sensitive on the DB side, that won't work. (A connection-string sketch for the TLS part is below.)
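For the TLS-verification point, a minimal connection sketch (the CA path, and the assumption that DATABASE_URL carries no existing query string, are made up for illustration; lib/pq is just one driver that supports these options):

    package main

    import (
        "database/sql"
        "log"
        "os"

        _ "github.com/lib/pq" // Postgres driver; pgx supports the same sslmode options
    )

    func main() {
        // sslmode=verify-full encrypts the connection AND checks the server
        // certificate against the hostname, so a hosted database reached over
        // the internet cannot be silently intercepted or impersonated.
        dsn := os.Getenv("DATABASE_URL") + "?sslmode=verify-full&sslrootcert=/etc/ssl/provider-ca.pem"
        db, err := sql.Open("postgres", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        if err := db.Ping(); err != nil {
            log.Fatal(err)
        }
        log.Println("connected with verified TLS")
    }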
If you’re using TLS what is the concern? I’d be more worried about Internet data transfer costs than that. Latency might be a concern but it’s going to be very dependent on use case.
It's not about encryption but the fact that the database could be siphoned off just by stealing the credentials and possibly getting one of your IPs whitelisted. If it's inside the network, they have to establish a bridgehead and maintain it, which is, in theory, more difficult and carries a higher risk of detection.
From a latency perspective, not really. As someone who used RedisLabs at a previous company, the requests got routed through the private network proxy (whatever AWS calls it), which minimizes any networking overhead.
I’ve seen entirely AWS workloads that still routed traffic out to the internet and back in.
Devs running infrastructure is always an interesting experience.
pretty hard to miss this in surprise billing.
You might be surprised at what companies will let slip through when the AWS bill is in the millions $/month.
true. congress should do something about egress gouging, it’s insane.
This is why I try to stick to compute primitives, for lack of a better word.
If you're building on Docker for compute, something S3-compatible for object storage, and something that is wire-compatible with Postgres or Redis, then you've got clear boundaries and industry-standard tech.
Stuff like that you can move fairly easily. The second you embrace something vendor-specific for core logic, you're locked in, which implies doing a vendor change AND a refactor simultaneously. (The sketch below shows the object-store side of this boundary.)
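As a small sketch of that object-store boundary (the endpoint variable name is made up, and this assumes the v1 aws-sdk-go; any S3-compatible store can sit behind it):

    package main

    import (
        "log"
        "os"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // Point the same client at S3, MinIO, or any other S3-compatible store
        // purely through configuration; application code does not change.
        sess := session.Must(session.NewSession(&aws.Config{
            Endpoint:         aws.String(os.Getenv("OBJECT_STORE_ENDPOINT")), // e.g. a MinIO URL
            Region:           aws.String("us-east-1"),
            S3ForcePathStyle: aws.Bool(true), // many S3-compatible stores require path-style URLs
        }))
        client := s3.New(sess)

        out, err := client.ListBuckets(&s3.ListBucketsInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, b := range out.Buckets {
            log.Println(*b.Name)
        }
    }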
We did a similar thing. Started all in on Heroku and then slowly moved database, redis, MQ and CICD off to dedicated providers over a 4 year period. Then we spent several months creating an architecture in a different cloud provider that we felt would be our next step in evolution and finally migrated our servers off Heroku.
I'm still a fan of Heroku and would highly recommend it to a brand new startup. But after a while you start realizing the limitations of Heroku and you need to move on. The fact that your startup is still around and growing enough that you need to migrate off Heroku should be seen as a sign of success.
I always feel a bit confused when seeing people discuss VC-funded hosting providers like Heroku, Vercel, Render, etc.
Many people remember moving off Heroku, but few seem to realize that the "new" providers are going to have the same period of increased costs, backlash, and settling in to just working with the big fish that can't or won't justify moving. So any discussion about how Vercel or Render or whoever is better just feels like missing the point.
The one thing I'll say is that a company like Vercel is definitely making a reasonable bet by trying to control the software as much as possible as well as the hardware. I find it unfortunate.
I think people don't care about Heroku's costs, because for a long time Heroku was basically the best setup for a certain kind of simple app, and had lots of wonderful goodies around it.
Every alternative seems to be pitching some different thing (the oddest to me is Fly with its edge computing stuff… I legit wonder how many projects at Fly go beyond like 2 machines let alone do all the fancy stuff), meanwhile “charge a bunch of people 100 bucks a month for 20 bucks of compute” seems to be where Heroku really thrived.
The problem with a "20k bar" is that the team will use that as a watermark for all cost savings. Groups of people tend to act within consensus.
The engineers are making a dozen cost tradeoffs a day. You want to instill cost savings in every decision.
Look how long this team suffered on a legacy platform thanks to a perfectly rational approach.
The service that I pay the most for is hosted on Heroku and also happens to be the slowest (20-30 seconds for the main page to load) and most crash-prone of the services I use.
I'm not sure why it is so slow. I'd like to blame it on something... Heroku, Rails...
> (20-30 seconds for the main page to load)
I’m just throwing this out there but that may be something you want to get to the bottom of…
Having your database provider take your db offline for 10 minutes every 2 weeks is insanity. I'd be switching providers the next week...
Heroku reminds me how the tools that helped us grow can eventually turn into limits we outgrow. In tech, staying dynamic means keeping the freedom to adapt, not just scale.
Which is also part of the appeal of the Heroku approach. 12-factor makes migrating relatively easy.
heroku gives a simplified devex, but a lot of it you can reproduce for a lot cheaper on aws. there is some heavy lifting to build the pipelines at first, but then it would be pretty similar
the devex is really good on cloud platforms now. What could compel teams to use heroku today? Is it really the convenience of "git push to deploy"?
I think yes, it is. For people who are new to IaaS, there is some work needed to get git-push-to-deploy working.
Mailgun is on the same path, exactly.