I've been a full-time FEM Analyst for 15 years now. It's generally a nice article, though in my opinion paints a far rosier picture of the last couple decades than is warranted.
Actual, practical use of FEM has been stagnant for quite some time. There have been some nice stability improvements to the numerical algorithms that make highly nonlinear problems a little easier; solvers are more optimized; and hardware is of course dramatically more capable (flash storage has been a godsend).
Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems. They have some nice results on the world's simplest "laboratory" problems, but accuracy is abysmal on most real-world problems - e.g. a method might give good results on a cylinder in simple tension, but fail horribly once you add bending.
There's still nothing better, but looking back I'm pretty surprised I'm still basically doing things the same way I was as an Engineer 1; and not for lack of trying. I've been on countless development projects that seem promising but just won't validate in the real world.
Industry focus has been far more on Verification and Validation (ASME V&V 10/20/40) which has done a lot to point out the various pitfalls and limitations. Academic research and the software vendors haven't been particularly keen to revisit the supposedly "solved" problems we're finding.
I'm a mechanical engineer, and I've been wanting to better understand the computational side of the tools I use every day. Do you have any recommendations for learning resources if one wanted to "relearn" FEA from a computer science perspective?
I learned it for the first time from this[0] course; part of the course covers deal.ii[1] where you program the stuff you're learning in C++.
[0]: https://open.umich.edu/find/open-educational-resources/engin...
[1]: https://www.dealii.org/
Start with FDM. Solve Euler-Bernoulli deflection of a beam.
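Something like this minimal NumPy sketch (all parameter values made up for illustration) is enough to get going; it splits the fourth-order Euler-Bernoulli equation into two tridiagonal second-order solves:

```python
import numpy as np

# Simply supported Euler-Bernoulli beam under uniform load q0:
#   EI * w'''' = q0,  w(0) = w(L) = 0,  w''(0) = w''(L) = 0.
# Substituting u = EI * w'' gives two 2nd-order problems:
#   u'' = q0 with u = 0 at the ends, then w'' = u / EI with w = 0 at the ends.
L, EI, q0, n = 2.0, 1.0e4, 1.0e3, 200
h = L / n

def solve_dirichlet(rhs):
    """Central-difference solve of y'' = rhs with y = 0 at both ends."""
    A = (np.diag(-2.0 * np.ones(n - 1)) +
         np.diag(np.ones(n - 2), 1) +
         np.diag(np.ones(n - 2), -1)) / h**2
    y = np.zeros(n + 1)
    y[1:-1] = np.linalg.solve(A, rhs[1:-1])
    return y

u = solve_dirichlet(np.full(n + 1, q0))  # u = EI * w'' (moment, scaled)
w = solve_dirichlet(u / EI)              # deflection, positive in the load direction

print("FDM midspan deflection: ", w.max())
print("Exact 5*q0*L^4/(384*EI):", 5 * q0 * L**4 / (384 * EI))
```

Once that matches the textbook answer, swapping the finite-difference stencil for FE shape functions is a natural next step.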
Have a look at FEniCS to start with.
> Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems
Even Arnold's work? FEEC seemed quite promising last time I was reading about it, but never seemed to get much traction in the wider FEM world.
"Stagnant" for the last 15 years??? Contact elements, bolt preload, modeling individual composite fibers, delamination and progressive ply failure, modeling layers of material down to a few thousandths of an inch, design optimization. ANSYS Workbench = FEA For Dummies. The list goes on.
I kind of thought Neural Operators were slotting into some of the problem domains where FEM is used (based on recent work in weather modelling, cloth modelling, etc.) and thought there was some sort of FEM -> NO lineage. Did I completely misunderstand that whole thing?
Those are definitely up next in the flashy-new-thing pipeline and I'm not that up to speed on them yet.
Another group within my company is evaluating them right now, and the early results seem to be "not very accurate, but directionally correct and very fast", so there may be some value in non-FEM experts using them to quickly tell if A or B is a better design; but you'll still need a proper analysis in more accurate tools.
It's still early though and we're just starting to see the first non-research solvers hitting the market.
Very curious: we are getting good results with PINNs and operators. What's your domain?
I was under the impression that the linear systems that come out of FEM are in some cases being solved by neural networks (or partially, e.g. as a preconditioner in an iterative scheme), but I don't know the details.
Could you write a blogpost-style article on how to model the shallow water wave equation on a sphere? The article would start with the simplest possible method, something that could be implemented in a short C program, and would continue with progressively more accurate and complex methods.
If you are interested in this, I'd recommend following an OpenFOAM tutorial; it's C++ though.
You could do SWE with finite elements, but generally finite volumes would be your choice: they handle potential discontinuities and are more stable and accurate for practical problems.
Here is a tutorial. https://www.tfd.chalmers.se/~hani/kurser/OS_CFD_2010/johanPi...
I'm looking for something like this, but more advanced. The common problem with such tutorials is that they stop at the simplest geometry (a square) and the simplest finite difference method.
What's unclear to me is how to model the spherical geometry without exploding the complexity of the solution. I know that a fully custom mesh with a pile of formulas for something like the Laplace-Beltrami operator would work, but I want something more elegant than that. For example, can I use the Fibonacci spiral to generate a uniform spherical mesh, and then somehow compute gradients and the Laplacian?
I suspect that the stability of FE or FV methods is rooted in the fact that the FE basis functions slightly overlap, so computing the next step is a lot like using an implicit FD scheme, or better, a variation of a compact FD scheme. However, I'm interested in how an adept in the field would solve this problem in practice. Again, I'm aware that there are methods of solving such systems (Jacobi, etc.), but those make the solution 10x more complex, buggier and slower.
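To make the question concrete: generating the points, and even a triangulation (the convex hull of points on a sphere is a valid surface triangulation), seems to be the easy part; it's an elegant discrete gradient/Laplacian on top of that which I'm missing. A sketch of the easy part:

```python
import numpy as np
from scipy.spatial import ConvexHull

def fibonacci_sphere(n):
    """n near-uniform points on the unit sphere via the golden-angle spiral."""
    i = np.arange(n)
    golden = (1.0 + 5.0**0.5) / 2.0
    theta = 2.0 * np.pi * i / golden       # longitudes step by the golden angle
    z = 1.0 - (2.0 * i + 1.0) / n          # equal-area latitude bands, poles avoided
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

pts = fibonacci_sphere(2000)
tris = ConvexHull(pts).simplices           # triangle connectivity of the sphere mesh
print(pts.shape, tris.shape)               # (2000, 3) (3996, 3)
```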
Interesting that this reads almost like a ChatGPT prompt.
Lazy people have been lazy forever. I stumbled across an example of this the other day from the 1990s, I think, and was shocked how much the student emails sounded like LLM prompts: https://www.chiark.greenend.org.uk/~martinh/poems/questions....
At least those had some basic politeness. So often I'm blown away not only how people blithely write "I NEED HELP, GIMME XYZ NOW NERDS" but especially how everyone is just falling over themselves to actually help! WTF?
Basic politeness is absolutely dead; nobody has any concept of acknowledging they are asking for a favour; we just blast Instagram/TikTok reels at top volume and smoke next to children and the elderly in packed public spaces, etc. I'm 100% sure it's not rose-tinted memories of the 90s making me think this; it wasn't always like this...
It reminds me of the old joke that half of the students are below average…
Except in Lake Wobegon, where all of the children are above average.
But that's not true, unless by "average" you mean the median.
Normally, it's all the same.
Only if the distribution has zero skewness.
Unless by "normally" you mean the normal distribution, which indeed has zero skewness.
Yes, it was an admittedly bad pun.
> Could you write a blogpost-style article on how to model the shallow water wave equation on a sphere?
Typically, the Finite Volume Method is used for fluid flow problems. It is possible to use Finite Element Methods, but it is rare.
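To give a flavor of what "finite volume" means in practice (not shallow water on a sphere, just the smallest possible example), here's first-order upwind FV for 1D advection on a periodic domain, with made-up parameters:

```python
import numpy as np

# Finite volume, first-order upwind, for q_t + a * q_x = 0 (periodic in x).
n, a, cfl = 400, 1.0, 0.9
dx = 1.0 / n
dt = cfl * dx / a
x = (np.arange(n) + 0.5) * dx            # cell centers on [0, 1]
q0 = np.exp(-200.0 * (x - 0.3) ** 2)     # initial hump of tracer
q = q0.copy()

for _ in range(int(0.4 / dt)):
    flux = a * q                          # upwind face flux (a > 0: use left cell)
    q -= dt / dx * (flux - np.roll(flux, 1))

# FV updates cell averages by face-flux differences, so mass is conserved exactly.
print("mass conserved:", np.isclose(q.sum() * dx, q0.sum() * dx))
```

The same cell-average/flux structure is what carries over to SWE, where careful flux handling is what deals with the discontinuities mentioned above.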
"As an AI language model, I am happy to comply with your request ( https://chatgpt.com/share/6727b644-b2e0-800b-b613-322072d9d3... ), but good luck finding a data set to verify it, LOL."
Have you heard of physics-informed neural nets? They seem like a hot candidate to potentially yield better results in the future.
I started my career doing FE modeling and analysis with ANSYS and NASTRAN. Sometimes I miss these days. Thinking about how to simplify a real world problem so far that it is solvable with the computational means available was always fun. Then pushing quads around for hours until the mesh was good had an almost meditative effect. But I don't feel overwhelmingly eager to learn a new software or language.
Much to my surprise, it seems there hasn't been much movement there. ANSYS still seems to be the leader for general simulation and multi-physics. NASTRAN still popular. Still no viable open-source solution.
The only new player seems to be COMSOL. Has anyone experience with it? Would it be worth a try for someone who knows ANSYS and NASTRAN well?
I've used Ansys daily for over a decade, and the only movement is in how they name their license tiers. It's a slow muddy death march. Every year I'm fighting the software more and more; the salesmen are clearly at the wheel.
They buy "vertically aligned" software, integrate it, then slowly let it die. They just announced they're killing off one of these next year, one they bought ten years ago, because they want to push a competing product with 20% of the features.
I've been using Nastran for half as long, but it isn't much better. It's all sales.
I dabbled a bit in Abaqus; that seems nice. Probably because I just dabbled in it.
But here I'm just trying to do my work, and all these companies do is move capabilities around their license tiers and boil the frog as fast as they can get away with.
I've gone Abaqus > Ansys > Abaqus/LS-DYNA over my career and hate Ansys with a fiery passion. It's the easiest one to run your first model in, but when you start applying it to real problems it's a fully adversarial relationship. The fact that you have to make a complete copy of the geometry/mesh in a new Workbench "block" to run a slightly different load case (and you can't read in orphaned results files) is just horrible.
Abaqus is more difficult to get up to speed in, but it's really nice from an advanced usability standpoint. They struggle due to cost though; it is hugely expensive and we've had to fight hard to keep it time and time again.
LS-DYNA is similar to Abaqus (though I'm not fully up to speed in it yet), but we're all just waiting to see how Ansys ruins it, especially now that they got bought out by Synopsys.
I don't know how long ago you used Ansys, and I definitely don't want to sell it, but you can share geometry/mesh between those "blocks" (by dragging blocks on top of each other), and you can read in orphaned result files.
> Still no viable open-source solution.
For the more low-level stuff there's the FEniCS project[1], for solving PDEs using fairly straightforward Python code like this[2]. When I say fairly straightforward, I mean it follows the math pretty closely; it's not exactly high-school level stuff.
[1]: https://fenicsproject.org/
[2]: https://jsdokken.com/dolfinx-tutorial/chapter2/linearelastic...
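For a taste, the canonical Poisson demo in the legacy FEniCS (dolfin) interface looks roughly like this; the newer DOLFINx API used in [2] differs in the details, so treat this as a sketch of the flavor rather than copy-paste code:

```python
from dolfin import *  # legacy FEniCS interface

# -Δu = f on the unit square with u = 0 on the boundary.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)            # piecewise-linear Lagrange elements

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)
a = dot(grad(u), grad(v)) * dx             # bilinear form: the weak Laplacian
L = f * v * dx                             # linear form: the load

bc = DirichletBC(V, Constant(0.0), "on_boundary")
uh = Function(V)
solve(a == L, uh, bc)                      # assemble and solve in one call
print(uh.vector().max())
```

Note how the `a` and `L` lines are almost a transliteration of the weak form, which is what people mean by "it follows the math pretty closely".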
Interesting. Please bear with me as this is going off 25-year-old memories, but my memory is that the workflow for using FEA tools was: model in some 3D engineering modelling tool (e.g. SolidWorks), Ansys to run FEA, iterate if needed, prototype, iterate.
So to have anything useful, you need that entire pipeline? For hobbyists, I assume we need this stack. What are the popular modelling tools?
To get started with FEniCS you can maybe use the FEATool GUI, which makes it easier to set up FEA models, and can also export Python simulation scripts to learn or modify the FEniCS syntax [1].
[1]: https://www.featool.com/tutorial/2017/06/16/Python-Multiphys...
Yeah, not my domain so I wouldn't really know. For FEniCS I know Gmsh[1] was used. There's been some work[2][3] done to integrate FEniCS with FreeCAD. It seems FreeCAD also supports[4] other FEM solvers.
But, I guess you get what you pay for in this space still.
[1]: https://gmsh.info/
[2]: https://github.com/qingfengxia/Cfd
[3]: https://github.com/qingfengxia/FenicsSolver
[4]: https://wiki.freecad.org/FEM_Solver
You can export other CAD meshes for use in it
FEniCS is mostly used by academic researchers. I used it for FEM modelling in magnetics, for example, where the sorts of problems we wanted to solve can't be done in a commercial package.
> For hobbyists, I assume we need this stack.
Just curious what kind of hobby leads to a finite element analysis?
Electronics (when you start to care about EMI or antenna design), model airplanes (for aerodynamics), rocketry, machining (especially if you want to get into SPIF), robotics, 3-D printing (especially for topology optimization), basically anything that deals with designing solid structures in the physical world. Also, computer graphics, including video games.
Unfortunately the barrier to entry is too high for most hobbyists in these fields to use FEM right now.
There are some obvious downsides and exceptions to this sentiment, but on balance, I really appreciate how the expansive access to information via the internet has fostered this phenomenon: where an unremarkable fella with a dusty media studies degree, a well-equipped garage, and probably too much free time can engineer and construct robotic machines, implement/tweak machine vision mechanisms, microwave radio transceivers, nanometer-scale measurements using laser diodes and optical interferometry, deep-sky astrophotography, etc., etc. Of course, with burgeoning curiosity and expanding access to surplus university science lab equipment come armchair experts and the potential for insufferability[0]. It's crucial to maintain perspective and be mindful of just how little any one person (especially a person with a media studies degree) can possibly know.
[0] I’m pretty sure “insufferability” isn’t a real word. [Edit: don’t use an asterisk for footnotes.]
> come armchair experts and the potential for insufferability
Hey, I resemble that remark! I'd be maybe a little less armchair with more surplus equipment access, but maybe no less insufferable.
By all accounts, though, a degree of insufferability is no bar to doing worthwhile work; Socrates, Galileo, Newton, Babbage, and Heaviside were all apparently quite insufferable, perhaps as much so as that homeless guy who yells at you about adrenochrome when you walk by his park encampment. (Don't fall into the trap of thinking it's an advantage, though.) Getting sidetracked by trivialities and delusions is a greater risk. Most people spend their whole lives on it.
As for how little any person can know, you can certainly know more than anyone who lived a century ago: more than Einstein, more than Edison, more than Noether, more than Tesla, more than Gauss. Any one of the hobbies you named will put you in contact with information they never had, and you can draw on a century or more of academic literature they didn't have, thanks to Libgen and Sci-Hub (and thus Bitcoin).
And it's easy to know more than an average doctorate holder; all you have to do is study, but not forget everything you study the way university students do, and not fall into traps like ancient aliens and the like. I mean, you can still do good work if you believe in ancient aliens (Newton and Tesla certainly believed dumber things) but probably not good archeological work.
Don't be discouraged by prejudice against autodidacts. Lagrange, Heaviside, and du Châtelet were autodidacts, and Ptolemy seems to have been as well. And they didn't even have Wikipedia or Debian! Nobody gets a Nobel for passing a lot of exams.
IMO, the mathematics underlying finite element methods and related subjects — finite element exterior calculus comes immediately to mind — are interesting enough to constitute a hobby in their own right.
COMSOL's big advantage is that it ties a lot of different physics regimes together and makes it very easy to couple them. Want to do coupled structures/fluid? Or coupled electromagnetic/mechanical? It's probably the easiest one to use.
Each individual physics regime is not particularly good on its own - there are far better mechanical, CFD, electromagnetism, etc. solvers out there - but those are all made by different vendors and don't play nicely with each other.
I am hoping this open source FEM library will catch on: https://www.dealii.org/. The "deal" in deal.II stands for Differential Equation Analysis Library.
It's written in C++, makes heavy use of templates, and has been in development since 2000. It's not meant for solid mechanics or fluid mechanics specifically, but for FEM solutions of general PDEs.
The documentation is vast, the examples are numerous, and the library interfaces with other libraries like PETSc, Trilinos, etc. You can output results to a variety of formats.
I believe support for triangle and tetrahedral elements was added only recently. In spite of this, one quirk of the library is that meshes are called "triangulations".
I've worked with COMSOL (I have a smaller amount of ANSYS experience to compare to). For the most part I preferred COMSOL's UI and workflow, and I leveraged a lot of COMSOL's scripting capabilities, which was handy for a big but procedural geometry I had (I don't know ANSYS's capabilities for that). They of course largely do the same stuff. If you have easy access to COMSOL to try it out, I'd recommend it just for the experience. I've found that working with other tools sometimes makes me recognize some capability or technique that hadn't clicked for me yet.
OpenFOAM seems like an open-source option, but I have found it rather impenetrable - there are some YouTube videos and PDF tutorials, but they are quite dense and specific and don't seem to cover the entire pipeline.
Happy to hear if people have good resources!
> Still no viable open-source solution.
Wait? What? NASTRAN was originally developed by NASA and open sourced over two decades ago. Is this commercial software built on top that is closed source?
I’m astonished ANSYS and NASTRAN are still the only players in town. I remember using NASTRAN 20 years ago for FE of structures while doing aero engineering. And even then NASTRAN was almost 40 years old and ancient.
There's a bunch of open-source FEM solvers, e.g. CalculiX, Code_Aster, OpenRadioss, and probably a few unmaintained forks of (NASA) NASTRAN, but I don't think there's a multiphysics package.
These are at least capable of thermomechanical analysis with fluid-structure coupling. Not all-physics, but still multi. True that things like multi-species diffusion or electromagnetics are missing, but maybe Elmer can fill the gap.
Abaqus is up there with Ansys as well, as others have mentioned.
Abaqus is pretty big too. I've worked with both Ansys and Abaqus and I generally prefer the latter.
> The only new player seems to be COMSOL
Ouch. I kind of know Comsol because it was already taught in my engineering school 15 years ago, so the fact that it still counts as a "new entrant" really gives an idea of how slowly the field evolves.
The COMSOL company was started in 1986....
It used to be called FEMLAB :)
But they changed to COMSOL because they didn't have the trademark in Japan, and "FEM" also carried associations with the feminine gender.
As a recovering FE modeler, I understand completely.
I work in this field and it really is stagnant and dominated by high-priced Ansys/etc. For some reason Silicon Valley's open-source ethos hasn't touched it. For open source, there's CalculiX, which is full of bugs, and Code_Aster, which everybody I've heard about it from says is too confusing to use. CalculiX has PrePoMax as a fairly new and popular pre/post.
Once you have a mesh that's "good enough", you can use any number of numeric solvers. COMSOL has a very good mesher, and a competent geometry editor. It's scriptable, and their solvers are also very good.
There might be better programs for some problems, but COMSOL is quite nice.
During my industrial PhD, I created an Object-Oriented Programming (OOP) framework for Large Scale Air-Pollution (LSAP) simulations.
The OOP framework I created was based on Petrov-Galerkin FEM. (Both proper 2D and "layered" 3D.)
Before my PhD work, the people I worked with (worked for) used spectral methods and alternating-direction FEM (i.e. using 1D to approximate 2D).
In some conferences and interviews, certain scientists would tell me that programming FEM is easy (for LSAP). I would always kind of agree and ask how many times they had done it (for LSAP or anything else). I never got an answer from those scientists...
Applying FEM to real-life problems can involve resolving quite a lot of "little" practical and theoretical gotchas, bugs, etc.
> Applying FEM to real-life problems can involve resolving quite a lot of "little" practical and theoretical gotchas, bugs, etc.
FEM at its core ends up being just a technique to find approximate solutions to problems expressed with partial differential equations.
Finding analytical solutions that satisfy both the boundary conditions and the domain of a practical problem is essentially impossible. FEM trades correctness for an approximation: it can be exact at prescribed boundary conditions, but it approximates both how the domain is expressed and the solution itself, and it has nice properties such as the approximation error converging to the exact solution as the approximation is refined. That refinement means exponentially larger computational budgets.
I also studied FEM in undergrad and grad school. There's something very satisfying about breaking an intractably difficult real-world problem up into finite chunks of simplified, simulated reality and getting a useful, albeit explicitly imperfect, answer out of the other end. I find myself thinking about this approach often.
A 45-comment thread at the time: https://news.ycombinator.com/item?id=33480799
Predicting how things evolve in space-time is a fundamental need. Finite element methods deserve the glory of a place at the top of the HN list. I opted for "orthogonal collocation" as the method of choice for my model back in the day because it was faster and more fitting to the problem at hand. A couple of my fellow researchers did use FEM. It was all the rage in the 90s for sure.
Interesting perspective. I just attended an academic conference on isogeometric analysis (IGA), which is briefly mentioned in this article. Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community. IGA has a lot of potential to solve many of the pain points of FEM. It has better convergence rates in general, allows for better timesteps in explicit solvers, has better methods to ensure stability in, e.g., incompressible solids, and perhaps most exciting, enables an immersed approach, where the problem of meshing is all but gone as the geometry is just immersed in a background grid that is easy to mesh. There is still a lot to be done to drive adoption in industry, but this is likely the future of FEM.
> IGA has a lot of potential to solve many of the pain points of FEM.
Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?
If I recall correctly, convergence rates are exactly the same, but the whole approach fails to realize that, other than at boundaries, geometry and the fields of quantities of interest do not have the same spatial distributions.
IGA has been around for ages, and never materialized beyond the "let's reuse the CAD functions" trick, which ends up making the problem more complex without any tangible return when compared with plain old p-refinement. What is left in terms of potential?
> Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community.
I recall the name Tom Hughes. I have his FEM book, and he's been for years (decades) the only one pushing the concept. The reason being that the whole computational mechanics community looked at it, found it interesting, but ultimately decided it wasn't worth the trouble. There are far more interesting and promising ideas in FEM than using splines to build elements.
> Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?
That's how it started, yes. The splines used to specify the geometry are trimmed surfaces, and IGA has expanded from there to the use of splines generally as the shape functions, as well as trimming of volumes, etc. This use of smooth splines as shape functions improves the accuracy per degree of freedom.
> If I recall correctly, convergence rates are exactly the same
Okay, looks like I remembered wrong here. What we definitely do see is that in IGA you get the convergence rates of higher degrees without drastically increasing your degrees of freedom, meaning that there is better accuracy per degree of freedom for any degree above 1. See for example Figures 16 and 18 in this paper: https://www.researchgate.net/profile/Laurens-Coox/publicatio...
> geometry and the fields of quantities of interest do not have the same spatial distributions.
Using the same shape functions doesn't automatically mean that they will have the same spatial distributions. In fact, with hierarchical refinement in splines you can refine the geometry and any single field of interest separately.
> What is left in terms of potential?
The biggest potential other than higher accuracy per degree of freedom is perhaps trimming. In FEM, trimming your shape functions makes the solution unusable. In IGA, you can immerse your model in a "brick" of smooth spline shape functions, trim off the region outside, and run the simulation while still getting optimal convergence properties. This effectively means little to no meshing required. For a company that is readying this for use in industry, take a look at https://coreform.com/ (disclosure, I used to be a software developer there).
I took a course in undergrad, and was exposed to it in grad school again, and for the life of me I still don't understand the derivations, either Galerkin or variational.
I learned it from the structural engineering perspective. What are you struggling with? In my mind I have this logic flow: 1. strong-form PDE; 2. weak form; 3. discretized weak form; 4. compute integrals (numerically) over each element; 5. assemble the linear system; 6. solve the linear system.
Luckily the integrals of step 4 are already worked out in textbooks and research papers for all the problems people commonly use FEA for, so you can almost always skip steps 1, 2, and 3.
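To make steps 4-6 concrete, here's a toy end-to-end run for the 1D Poisson problem -u'' = 1 on (0,1) with u(0) = u(1) = 0 and linear elements, where the step-4 integrals reduce to the textbook 2x2 element matrices (a sketch, not production code):

```python
import numpy as np

n = 50                                    # number of linear elements
nodes = np.linspace(0.0, 1.0, n + 1)
K = np.zeros((n + 1, n + 1))              # global stiffness matrix
F = np.zeros(n + 1)                       # global load vector

for e in range(n):                        # step 4: element integrals
    h = nodes[e + 1] - nodes[e]
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h  # integral of phi_i' phi_j'
    fe = np.array([0.5, 0.5]) * h                  # integral of f phi_i with f = 1
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke             # step 5: assemble into the global system
    F[idx] += fe

# Dirichlet BCs u(0) = u(1) = 0 by row replacement.
K[0, :], K[0, 0], F[0] = 0.0, 1.0, 0.0
K[-1, :], K[-1, -1], F[-1] = 0.0, 1.0, 0.0

u = np.linalg.solve(K, F)                 # step 6: solve the linear system
print("FEM max:", u.max(), "exact:", 1.0 / 8.0)   # exact solution is x(1-x)/2
```

For real problems you'd use a sparse matrix and a library element, but the assembly loop has exactly this shape.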
Do you have any textbook recommendations for the structural engineering perspective?
For anyone interested in a contemporary implementation, SELF is a spectral element library in object-oriented Fortran [1]. The devs here at Fluid Numerics have upcoming benchmarks on our MI300A system and other cool hardware.
[1] https://github.com/FluidNumerics/SELF
I have such a fondness for FEA. ANSYS and COSMOS were the ones I used, and I’ve written toy modelers and solvers (one for my HP 48g) and even tinkered with using GPUs for getting answers faster (back in the early 2000s).
Unfortunately my experience is that FEA is a blunt instrument with narrow practical applications. Where it’s needed, it is absolutely fantastic. Where it’s used when it isn’t needed, it’s quite the albatross.
My hot take is that FEM is best used as unit testing for machine design, not the design guide it's often treated as. The greatest mechanical engineer I know once designed an entire mechanical wrist and arm with five fingers, actuation, lots of parts, and flexible finger tendons. He never used FEM at any point in his design. He instead did it the old-fashioned way: design and fab a simple prototype, get a feel for it, use the tolerances you discovered in the next prototype, and just keep iterating quickly. If I had gone to him and told him to model the flexor of his fingers in FEM, and then given him a book on how to correctly use the FEM software so that the results weren't nonsensical, I would have slowed him down if anything. Just build and you learn the tolerances, and the skill is in building many cheap prototypes to get the best idea of what the final expensive build will look like.
> The greatest mechanical engineer I know [...]
And with that you wrote the best reply to your own comment. Great programmers of the past wrote amazing systems just in assembly. But you needed to be a great programmer just to get anything done at all.
Nowadays dunces like me can write reasonable software in high level languages with plenty of libraries. That's progress.
Similar for mechanical engineering.
(Doing prototypes etc might still be a good idea, of course. My argument is mainly that what works for the best engineers doesn't necessarily work for the masses.)
Also, it might work for a mechanical arm the size of an arm, but not for one the size of the Eiffel Tower.
The Eiffel Tower was built before FEM existed. In fact, I doubt they even did FEM-like calculations.
This is true, although it was notable as an early application of Euler-Bernoulli beam theory in structural engineering, which helped to prove the usefulness of that method.
I meant a mechanical arm the size of the Eiffel Tower. You don't want to iterate physical products at that size.
Going by Boeing vs. SpaceX, iteration seems to be the most effective approach to building robotic physical products the size of the Eiffel Tower.
I'm sure they are doing plenty of calculations beforehand, too.
Unquestionably! Using FEM.
Would FEM be useful for that kind of problem? It's more for figuring out whether your structure will take the load, where the stress concentrations are, and what happens with thermal expansion. FEM won't do much for figuring out what the tolerances need to be on intricate mechanisms.
To be fair, FEM is not the right tool for mechanical linkage design (if anything, you'd use rigid body dynamics).
FEM is the tool you'd use to tell when and where the mechanical linkage assembly will break.
Garbage in, garbage out. If you don't fully understand the model, small parameter changes can create wildly different results. It's always good to go back to fundamentals and hand-check a simplification to get a feel for how it should behave.
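As a concrete example of the kind of hand check meant here (the numbers below are made up for illustration): a tip-loaded Euler-Bernoulli cantilever has a closed-form deflection you can compare against what a beam model spits out.

    # Hand-check sketch: Euler-Bernoulli tip deflection of a cantilever,
    # delta = F * L**3 / (3 * E * I). All values are illustrative.
    F = 1_000.0              # tip load, N
    L = 2.0                  # beam length, m
    E = 200e9                # Young's modulus of steel, Pa
    I = 0.1 * 0.1**3 / 12    # second moment of area, 0.1 m square section, m^4
    delta = F * L**3 / (3 * E * I)
    print(f"{delta * 1e3:.2f} mm")   # ~1.60 mm; an FEM model should land near this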
If he were designing a bridge, however ...
It's wrong to assume that everyone and every project can use an iterative method with endless prototypes. If you do, I have a prototype bridge to sell you.
Good luck designing crash-resilient structures without simulating them in FEM-based software, though.
The FEM is just a model of the crash-resistant structure. Hopefully it will behave like the actual structure, but that is not guaranteed. We use FEM because it is faster and cheaper than doing the tests on the actual thing. However, if you have the time and money to do your crash-resiliency tests on the actual product during the development phase, I expect the results would be much better.
Yes, with infinite time and budget you'd get much better results. That does not sound like an interesting proposition, though.
I'd guess most of the bridges in the US were built before FEM existed.
Anyone can design a bridge that holds up. The Romans did it millennia ago.
Engineering is designing a bridge that holds up to a certain load, with the least amount of material and/or cost. FEM gives you tighter bounds on that.
The average age of a bridge in the US is about 40-50 years, and the article's title says "80 years of FEM".
https://www.infrastructurereportcard.org/wp-content/uploads/...
I'd posit a large fraction were designed with FEM.
FEM runs on the same math and theories those bridges were designed with on paper.
They did just fine without such tools for the majority of innovation in the last century.
Having worked on the design of safety structures with mechanical engineers for a few projects, it is far, far cheaper to do a simulation and iterate over designs and situations than do that in a lab or work it out by hand. The type of stuff you can do on paper without FEM tends to be significantly oversimplified.
It doesn't replace things like actual tests, but it makes designing and understanding testing more efficient and more effective. It is also much easier to convince reviewers you've done your job correctly with them.
I'd argue computer simulation has been an important component of a majority of mechanical engineering innovation in the last century. If you asked a mechanical engineer to ignore those tools in their job, they'd (rightly) throw a fit. We did "just fine" without cars for the majority of humanity, but motorized vehicles significantly changed how we do things and the reach of what we can do.
> It is also much easier to convince reviewers you've done your job correctly with them.
In other words, the work that doesn't change the underlying reality of the product?
> We did "just fine" without cars for the majority of humanity
We went to the moon, invented aircraft, built bridges and skyscrapers, etc., all without FEM. That's why this is a bad comparison.
> If you asked a mechanical engineer to ignore those tools in their job they'd (rightly) throw a fit.
Of course. That's what they're accustomed to. The 80/20 paper techniques that software replaced have been forgotten.
When tests are cheap, you make a lot of them. When they are expensive, you do a few and maximize the information you learn from them.
I'm not arguing FEM doesn't provide net benefit to the industry.
What is your actual assertion? That tools like FEA are needless frippery or that they just dumb down practitioners who could have otherwise accomplished the same things with hand methods? Something else? You're replying to a practicing mechanical engineer whose experience rings true to this aerospace engineer.
Things like modern automotive structural safety or passenger aircraft safety are leagues better today than even as recently as the 1980s because engineers can perform many high-fidelity simulations long before they get to integrated system test. When integrated system test is so expensive, you're not going to explore a lot of new ideas that way.
The argument that computational tools are eroding deep engineering understanding is long-standing, and has aspects of both truth and falsity. Yep, they designed the SR-71 without FEA, but you would never do that today because for the same inflation-adjusted budget, we'd expect a lot more out of the design. Tools like FEA are what help engineers fulfill those expectations today.
> What is your actual assertion?
That the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."
Now what's my opinion? FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.
> passenger aircraft safety are leagues better today
Yep, but that's just restating the pros. Local iteration and testing.
> You're replying to a practicing mechanical engineer
Oh drpossum and I are getting to know each other.
I agree with his main point. It's an essential tool for combating certifications and reviews in a world of increasing regulatory and policy-based governance.
Except that everything's gotten abysmally complex. Vehicle crash-test experiments are a good example of validating the FEM simulation (yes, that's the correct order, not vice versa).
How can you assert so confidently you know the cause and effect?
Certainly computers allow more complexity, so there is interplay between what they enable and what's driven by good engineering.
FEM - because we can't solve PDEs!
Is it related to Galerkin?
From "Chaos researchers can now predict perilous points of no return" (2022) https://news.ycombinator.com/item?id=32862414 :
> FEM: Finite Element Method: https://en.wikipedia.org/wiki/Finite_element_method
>> FEM: Finite Element Method (for ~solving coupled PDEs (Partial Differential Equations))
>> FEA: Finite Element Analysis (applied FEM)
> awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea
And also, "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171