Engineering Mind Blindness
Units: The Invisible Foundation
Why Your Brain Ignores the Most Important Part of Every Simulation
By Joseph McFadden Sr.
McFaddenCAE.com
An audiobook exploring why our evolutionary brain architecture makes us blind to unit errors in simulation — and what to do about it.
Let me start with something that has nothing to do with engineering.
Right now, as you listen to this, your eyes are doing something extraordinary. You think you're seeing the world in high definition --- a crisp, continuous panorama of color and detail. But you're not. Not even close. Only about one percent of your visual field, a tiny spot called the fovea, actually sees in high resolution. The other ninety-nine percent? It's a blur. A smear. Like looking through frosted glass.
And yet you don't notice. You never notice. Because your brain is constructing a seamless illusion for you --- filling in the gaps, predicting what should be there, stitching together a world from scraps of data and a lifetime of expectations.
Your brain does this because it has to. It's the most metabolically expensive organ you own --- two percent of your body weight consuming twenty percent of your energy, drawing roughly twenty watts of power, continuously, every second you're alive. In children, the brain can consume up to fifty percent of the body's total energy budget. That's an enormous bill. And evolution, which is the most ruthless cost accountant that ever existed, solved this problem the only way it could: by making your brain a prediction machine.
Karl Friston at University College London formalized this as the free energy principle. The idea is elegant: your brain doesn't process every bit of incoming sensory data. That would be catastrophically expensive. Instead, it builds models of the world and then only pays full attention when reality violates those predictions. The crash of a glass breaking in a quiet room? Your brain snaps to attention --- that's a prediction error, something unexpected. But the hum of the air conditioner, the feel of your shirt against your skin, the peripheral shapes at the edges of your vision? Those get suppressed. Filed away. Treated as noise.
Now here's what I want you to hold in your mind: this exact same mechanism --- the one that makes you a brilliant, energy-efficient survivor in the physical world --- is the one that makes you skip past the units in a calculation. And that skipping isn't laziness. It isn't carelessness. It's your ancient brain doing exactly what it evolved to do: conserve energy by ignoring what it considers background noise.
The problem is that units aren't background noise. They're the deepest signal your work can send about whether you truly understand the physics.
Chapter one, The Signal in the Error
I teach fracture mechanics and lab courses at Fairfield University, and I've been doing failure analysis and simulation for over forty-four years. Every semester, I watch something happen in my classroom that I've also seen happen in professional engineering organizations, in aerospace programs, and in simulation groups at major corporations. It looks different at each level, but it's the same phenomenon.
A student sets up a calculation. Maybe a stress intensity factor, maybe a beam deflection, maybe a natural frequency estimate. The physics is right. The approach is right. The algebra is clean. And then, at the very end, the answer comes out in the wrong units. Or worse, the answer comes out with no units at all --- just a naked number sitting on the page.
Now, some professors would simply dock five points and move on. But I don't see a unit error as something to punish. I see it as a signal. A diagnostic. When a student drops the units, their brain is telling me something important: they haven't yet built an internal model where the units are inseparable from the meaning of the quantity.
Think about what it means to truly understand that stress is force per unit area. Not to recite it. Not to write sigma equals F over A on command. But to feel, at an intuitive level, that when you say "two hundred ten megapascals," you are describing two hundred ten million newtons pushing on every square meter of surface. If you carry that understanding in your bones, you cannot drop the units --- because the units are the meaning. Saying "the stress is two hundred ten" without the megapascals is like saying "the temperature is seventy-two" without specifying Fahrenheit or Celsius. It's not an answer. It's a fragment of one.
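If you want to see that sentence written as arithmetic, here it is --- nothing beyond the definition of stress, just the same two hundred ten megapascals spelled out:

```latex
\sigma = \frac{F}{A}
       = \frac{210\,000\,000\ \text{N}}{1\ \text{m}^{2}}
       = 210 \times 10^{6}\ \tfrac{\text{N}}{\text{m}^{2}}
       = 210\ \text{MPa}
```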
This is not about being pedantic. This is about fundamentals. And it's about truly understanding. The unit error on a quiz isn't the disease --- it's the symptom. The disease is surface-level processing, and our brains are wired for it.
Chapter two, Why We Skim the Surface
Let me take you deeper into the neuroscience, because once you see why your brain does this, you can start to fight it.
Remember the fovea --- that one percent of your visual field that actually sees in high resolution? Your brain uses the other ninety-nine percent as a kind of forward patrol. The peripheral blur detects shapes, motion, rough patterns. Your amygdala and attention networks evaluate what deserves a closer look. Something moves fast? The fovea snaps to it. A shape resembles a predator? Your whole system mobilizes. But a small change in a familiar pattern? That gets filtered out. It's not worth the metabolic cost.
Nilli Lavie at University College London has spent decades studying what she calls load theory. Her research shows that when your brain is operating under high cognitive load --- and taking an exam is certainly high cognitive load, and building a finite element model is certainly high cognitive load --- the brain doesn't just deprioritize peripheral information. It actively suppresses it. The visual cortex itself shows reduced activity for anything outside the focus of attention. The eyes see, but the brain does not process.
There's a famous demonstration of this. Researchers ask people to count basketball passes in a video, and while they're counting, a person in a gorilla suit walks right through the middle of the scene. Half the viewers never see the gorilla. The gorilla is fully visible. It's not hidden. It's not subtle. But the brain, loaded with the counting task, literally cannot allocate the resources to perceive it.
Now think about what happens during an exam. The student is reading the problem statement. Recalling formulas. Managing time. Doing algebra under pressure. Cognitive load is at maximum. And what does the brain do with the units? It treats them exactly like the gorilla. They're there. They're visible. But the prediction machine has classified them as peripheral, and they vanish from awareness.
This is why I've stopped treating unit errors as failures of diligence and started treating them as failures of depth. When a student carries units correctly through a calculation, it tells me they understand the physics well enough that the units are part of their thinking --- not an afterthought bolted on at the end. When they drop the units, it tells me we have more work to do on the fundamentals. The unit error is a gift, if you know how to read it.
Chapter three, From Quizzes to Cockpits
Now let me show you how this same mechanism scales. Because the spectrum of unit failures --- from a five-point deduction on a quiz to a catastrophe that makes international headlines --- is not a spectrum of different problems. It's the same problem at different altitudes.
July twenty-third, nineteen eighty-three. Air Canada Flight 143 takes off from Montreal bound for Edmonton with sixty-one passengers aboard. The aircraft is a brand-new Boeing 767 --- one of the first in the fleet calibrated for metric units. The fuel quantity system is malfunctioning, so the ground crew measures fuel manually with a dipstick. They need to convert liters to mass to know how much fuel is on board.
The crew multiplies the volume by one point seven seven --- the density of jet fuel. That number is correct. But it's in pounds per liter, because every other aircraft in the Air Canada fleet uses imperial. The new 767 needs kilograms per liter, which is about zero point eight. Nobody catches the mismatch. The flight management computer, which tracks fuel in kilograms, accepts the number without complaint --- just as Abaqus accepts whatever numbers you give it without checking units.
The result: the aircraft has less than half the fuel it needs. Over northwestern Ontario, at forty-one thousand feet, both engines flame out. The plane becomes a glider. Through extraordinary airmanship, the captain --- who happened to be a trained glider pilot --- deadsticks the 767 onto an abandoned airstrip in Gimli, Manitoba, that was being used as a go-kart track that afternoon. Sixty-one passengers walk away. The plane earns the nickname "the Gimli Glider."
Now, here's what I want you to notice. The crew was not incompetent. They were experienced professionals. But they were operating under cognitive load: a malfunctioning fuel system, a new aircraft type, time pressure, and a fleet in the middle of transitioning from imperial to metric. Their brains did exactly what brains do: reached for the familiar number, the one that matched their prediction model --- one point seven seven --- and moved on. No prediction error. No surprise signal. No gorilla.
The student who writes "stress equals two hundred ten" without the megapascals and the crew who enters one point seven seven without checking the unit --- they are making the same cognitive error at different altitudes. The brain decided the detail wasn't worth the energy.
Chapter four, Three Hundred Twenty-Seven Million Dollars
Sixteen years after the Gimli Glider, the altitude got even higher.
September twenty-third, nineteen ninety-nine. NASA's Mars Climate Orbiter arrives at Mars after a nine-month journey. The spacecraft is supposed to enter orbit at an altitude of about two hundred twenty-six kilometers. Instead, it comes in at fifty-seven kilometers --- deep in the atmosphere where no spacecraft can survive. It burns up. Three hundred twenty-seven million dollars, gone.
The root cause was almost insultingly simple. Lockheed Martin, who built the spacecraft, wrote software that calculated thruster impulse in pound-force seconds. NASA's Jet Propulsion Laboratory, who navigated the spacecraft, expected those values in newton-seconds. The conversion factor between the two is four point four five. Every single trajectory correction over nine months of flight was off by that factor. The errors accumulated, nudging the spacecraft closer and closer to Mars on a path that would ultimately destroy it.
Edward Weiler, NASA's associate administrator for space science, said something afterward that cuts right to the heart of our story. He said: "The problem here was not the error. It was the failure of NASA's systems engineering, and the checks and balances in our processes, to detect the error."
Read that again. The problem was not the error. The problem was that no one detected it. For nine months. Across two organizations. With hundreds of engineers. The pound-force values looked like numbers. They were in the right format, in the right range. They passed the brain's quick pattern check. No prediction error. No surprise signal. The gorilla walked through the basketball game for nine months and nobody saw it.
Now let me ask you: is this fundamentally different from the student who writes "two hundred ten" without the megapascals? The mechanism is identical. The brain encounters a number that looks about right, classifies it as "handled," and moves its limited attention to whatever seems more interesting or more urgent. On a quiz, the cost is five points. At Mars, the cost is a spacecraft.
Chapter five, One Point Three Millimeters
Let me give you one more from the high end of the spectrum, because this one is the most haunting. It's not strictly a unit conversion error, but it's the same cognitive phenomenon --- a measurement that everyone assumed was correct because the system they trusted told them it was.
The Hubble Space Telescope. One point five billion dollars. The most sophisticated optical instrument ever built. Launched in April of nineteen ninety by the Space Shuttle Discovery. Six weeks later, the first images came down. They were blurry.
The primary mirror --- ninety-four and a half inches across, polished to a smoothness measured in millionths of an inch --- had been ground to the wrong shape. It was too flat near the outer edge by about two microns. Two millionths of a meter. The most precisely wrong mirror in the history of optics.
The cause? A measuring device called a reflective null corrector, used to check the mirror's curvature during fabrication, had a lens that was positioned one point three millimeters off. That's it. A millimeter and a third. Three washers had been inserted to fill a gap in the device, and they shifted the reference point just enough to guide the polishing machine to sculpt a perfect mirror to exactly the wrong specification.
Here's the part that matters for our story: the error was detected. Twice. A second null corrector, built with lenses instead of mirrors, clearly showed that something was wrong with the primary mirror's shape. The technicians dismissed it. They assumed the cruder device was poorly calibrated, not that the precision device was in error. They trusted their prediction model --- "we have the most perfect mirror ever ground by humans on Earth" --- and discounted the evidence that contradicted it. The brain saw the conflicting data and resolved the conflict by rejecting the surprise rather than investigating it.
It took a seven-hundred-million-dollar shuttle repair mission three years later to install corrective optics --- essentially giving the most expensive telescope in history a pair of glasses.
One point three millimeters. Two microns of mirror shape. A number that someone assumed was right because the instrument they trusted said so. This is the prediction machine at its most dangerous: not when it ignores evidence, but when it actively overrides contradictory evidence to protect the existing model.
And that is exactly what happens when an experienced engineer glances at a density value of seven point eight five E minus nine, says "looks right," and moves on. The prediction model says the number is fine. The brain sees no reason to spend energy on a deeper check. And most of the time, the prediction is correct. But when it's wrong, the consequences scale with the altitude of the project.
Chapter six, The Full Spectrum
So let me lay out the spectrum explicitly, because I think seeing it end to end is what makes the point land.
At the bottom: a student loses five points on a quiz for dropping units from a stress calculation. The physics was right. The intuition was incomplete. The unit error is a signal that we need to build deeper understanding. I don't punish this --- I use it as a teaching moment. We talk about what stress physically means. We talk about what it would feel like to have two hundred ten million newtons pushing on a square meter of your desk. Once the student can feel the unit, they stop dropping it.
One level up: a junior engineer builds a finite element model using material properties from a vendor's datasheet. The vendor uses grams, millimeters, and milliseconds. The engineer's model uses tonnes, millimeters, and seconds. The Young's modulus is the same in both systems --- two hundred ten thousand. Only the density is different: seven point eight five E minus three versus seven point eight five E minus nine. Six orders of magnitude. The model runs. It converges. It produces beautiful contour plots. The mass is wrong by a factor of a million. Nobody catches it because the results look like results.
One level up: a design team validates a product for a one-meter drop test. They inherit a simulation model and don't verify the unit system. The drop velocity is entered as forty-four hundred millimeters per second in a model that expects millimeters per millisecond. The simulated impact velocity is a thousand times too high --- or a thousand times too low, depending on which direction the error goes --- and the kinetic energy is off by a factor of a million either way. The physical prototype passes the test. The simulation didn't predict it. Or worse, the simulation said it would pass when it shouldn't, and the product fails in the field.
Higher: the Gimli Glider. Sixty-one passengers at forty-one thousand feet with no fuel. Pounds instead of kilograms. A two-point-two-times error. Survived by extraordinary luck and skill.
Higher still: Tokyo Disneyland, two thousand three. The Space Mountain roller coaster derailed because replacement parts were ordered from drawings that hadn't been converted to metric. The axles were forty-four point one four millimeters instead of forty-five. Less than a millimeter of difference. Enough to cause a structural failure on a ride full of people.
And at the top: the Mars Climate Orbiter. Pound-force seconds instead of newton-seconds. A four-point-four-five-times error. Nine months of accumulated trajectory drift. Three hundred twenty-seven million dollars vaporized in the Martian atmosphere. And right alongside it, Hubble --- a one-point-three-millimeter positioning error in a measuring device, producing the most precisely wrong mirror ever made, blinding a one-point-five-billion-dollar telescope.
Every single one of these --- from the quiz to Mars --- is the same brain, making the same energy-conservation decision: "that detail doesn't warrant my attention right now." The magnitude of the consequence is different. The cognitive mechanism is identical.
Chapter seven, Units as DNA
I use the word DNA deliberately, and not loosely. Think about what DNA actually does. It doesn't just describe an organism. It encodes the fundamental instructions that determine whether every downstream process produces the correct result. If there's an error in the DNA --- even a single base pair --- the proteins fold wrong, the cells malfunction, the organism fails. And the error can be tiny. One substitution in three billion base pairs can cause sickle cell disease.
That's exactly what unit consistency does in a simulation. Every number in your model --- every modulus, every density, every velocity, every force --- is meaningless without its unit context. And unlike DNA, which at least has error-correction mechanisms built into the cellular machinery, Abaqus has no unit system at all. None. Zero. The software enforces nothing. It accepts whatever numbers you give it and solves the math. If the math is dimensionally inconsistent, the software doesn't know and doesn't care. It just gives you an answer. And that answer is fiction dressed up in colorful contour plots.
Your stiffness matrix is built from Young's modulus and element geometry. Your mass matrix is built from density and element geometry. When the solver assembles these matrices, it must assume that the length units in your modulus, your density, your geometry, and your boundary conditions are all the same. If they're not, every eigenvalue, every stress, every displacement is proportionally wrong. But --- and this is what makes units so treacherous --- the results still look like results. You still get stress distributions. You still get displacement fields. The model runs. It converges. It just produces wrong answers with complete confidence.
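For anyone who wants the proportionality on paper, the standard undamped eigenvalue statement makes the point --- this is generic structural dynamics, not anything specific to one solver:

```latex
K\,\phi = \omega^{2} M\,\phi
\;\;\Rightarrow\;\;
\omega \propto \sqrt{\tfrac{k}{m}}
\qquad\text{so}\qquad
m \to 10^{6}\,m \;\Rightarrow\; \omega \to \tfrac{\omega}{10^{3}}
```

A density six orders of magnitude too large drags every natural frequency down by a factor of a thousand --- exactly the failure mode we'll meet again in the next chapter.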
This is why I tell my students: if you truly understand a quantity --- if you feel what it means physically --- you can't separate it from its units. The units aren't labels you attach after the calculation. They're embedded in the meaning of the number from the moment it enters your model. When someone writes a density of seven point eight five E minus nine and knows, without checking a table, that this means tonnes per cubic millimeter, they have internalized the unit system. When someone writes seven point eight five E minus nine and doesn't know what unit that density is in, they're operating at the surface. And surface-level processing is where the errors live.
Chapter eight, The Systems and Their Fingerprints
So let's build some of that depth. Because knowing your unit system cold --- having it in your bones, not just in your notes --- is the antidote to your brain's natural tendency to skim.
In structural simulation, the most common system, and the one I recommend as your default, is tonne, millimeter, second. Stress comes out in megapascals. Force in newtons. Energy in millijoules. Steel has a Young's modulus of two hundred ten thousand, a density of seven point eight five times ten to the minus nine, and gravity is nine thousand eight hundred ten millimeters per second squared.
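It's worth seeing, once, why those output units fall out of that choice of base units --- this is just Newton's second law and the definition of stress:

```latex
F = m\,a:\;\;
1\ \text{t}\cdot\tfrac{\text{mm}}{\text{s}^{2}}
 = 1000\ \text{kg}\cdot 0.001\ \tfrac{\text{m}}{\text{s}^{2}} = 1\ \text{N}
\qquad
\sigma = \tfrac{F}{A}:\;\;
1\ \tfrac{\text{N}}{\text{mm}^{2}} = 10^{6}\ \tfrac{\text{N}}{\text{m}^{2}} = 1\ \text{MPa}
```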
That density --- seven point eight five E minus nine --- is the fingerprint. If you take away one thing from this entire audiobook, take this: density magnitude is your most reliable tool for identifying which unit system a model is using. It's the one number that changes dramatically between systems while the others can stay in familiar ranges.
SI base units --- kilogram, meter, second --- give you steel density at seven thousand eight hundred fifty. Young's modulus at two point one times ten to the eleven. That modulus value in the hundreds of billions is the giveaway.
Now here's where it gets dangerous. In explicit dynamics --- drop tests, crash, impact --- many vendors use millisecond-based unit systems. The popular one is gram, millimeter, millisecond. It has a beautiful property: steel's Young's modulus stays at two hundred ten thousand, the same number you know from tonne-millimeter-second. But the density changes from seven point eight five E minus nine to seven point eight five E minus three. Same modulus. Density off by a factor of a million.
This system works because of a precise mathematical cancellation. Young's modulus has units of mass over length times time squared. When you switch time from seconds to milliseconds, time squared changes by a factor of a million. But when you simultaneously switch mass from tonnes to grams --- also a factor of a million --- the two cancel. E stays the same. But the density must change accordingly. If someone keeps the tonne-millimeter density and enters millisecond-scale time periods, the model runs without error and every result is wrong.
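Written out, the cancellation looks like this --- the numbers are just the scale factors between the two systems:

```latex
[E] = \tfrac{M}{L\,T^{2}}:\quad
\tfrac{\text{t}\to\text{g}}{(\text{s}\to\text{ms})^{2}}
 = \tfrac{10^{6}}{(10^{3})^{2}} = 1
\;\Rightarrow\; E \text{ stays at } 210\,000
\qquad\qquad
[\rho] = \tfrac{M}{L^{3}}:\quad
7.85\times10^{-9} \times 10^{6} = 7.85\times10^{-3}
```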
This is a trap that catches experienced engineers specifically because the modulus value looks correct. The brain checks E, sees two hundred ten thousand, says "that's right," and moves on. The prediction machine is satisfied. But the density is in the wrong system, and the mass is off by a factor of a million. Natural frequencies off by a thousand. Wave speeds wrong. Energy balance meaningless. And no error message anywhere.
The imperial system adds its own trap. In inches, pounds-force, and seconds, the consistent mass unit is the slinch --- about a hundred seventy-five kilograms. Steel density is seven point three three E minus four slinches per cubic inch. But if you use the machinist's handbook value of zero point two eight four --- which is in pounds-mass per cubic inch --- your model is three hundred eighty-six times too heavy. And zero point two eight four looks more reasonable than seven point three three E minus four. The wrong number passes the prediction check. The right number triggers doubt. Your brain's energy-conservation bias pushes you toward the error.
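The conversion behind that factor of three hundred eighty-six is a single division by standard gravity expressed in inches per second squared; the small spread between seven point three and seven point four times ten to the minus four just depends on which handbook value you start from:

```latex
g_c = 386.1\ \tfrac{\text{in}}{\text{s}^{2}},
\qquad
\rho_{\text{slinch/in}^{3}} = \frac{\rho_{\text{lbm/in}^{3}}}{g_c}
= \frac{0.284}{386.1} \approx 7.4\times10^{-4}
```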
Chapter nine, Building Awareness, Not Punishment
So what do we do about this? How do you fight three hundred thousand years of evolutionary conditioning that tells your brain to skip the small stuff?
The answer is not punishment. Not stricter rules. Not more red ink on quizzes. The answer is awareness --- bringing the unconscious into the conscious, making the invisible visible.
In the classroom, when I see a unit error, I don't just mark it wrong. I stop and ask the student: "What does that number physically mean? What would it feel like? How heavy is that? How fast is that? What would you see if you stood next to this event?" Because if they can answer those questions --- if they can feel the physics --- the units come along for free. You don't need to remind someone who truly understands stress to write megapascals. The unit is part of their mental image.
This is what I mean by building intuition before equations. The equation is a tool. The intuition is the understanding. When the intuition is deep enough, the units are obvious. When the intuition is shallow, the units fall off. So the unit error on the quiz isn't a failure of discipline. It's a signal that I, as the teacher, need to help build a deeper internal model.
In professional practice, the same philosophy applies, but you need external systems to supplement the internal ones. Here's what I recommend:
First: know the density fingerprint. For every unit system you work in, memorize the density of steel. Seven point eight five E minus nine is tonne-millimeter-second. Seven point eight five E minus three is gram-millimeter-millisecond. Seven point eight five E minus six is kilogram-millimeter-second. Seven point three three E minus four is slinch-inch-second. Each one unique. Each one unambiguous if you know what to look for.
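If you want that memorized table in executable form, here is a minimal sketch in Python --- the density values are the ones listed above; the function, its name, and the tolerance are mine, purely for illustration, and this is not the analyzer mentioned later:

```python
import math

# Steel density fingerprints from this chapter, one value per consistent unit system.
STEEL_DENSITY = {
    "tonne-mm-s (stress in MPa)":  7.85e-9,
    "g-mm-ms (stress in MPa)":     7.85e-3,
    "kg-mm-s (stress in kPa)":     7.85e-6,
    "kg-m-s, SI (stress in Pa)":   7850.0,
    "slinch-in-s (stress in psi)": 7.33e-4,
}

def guess_unit_system(density, rel_tol=0.25):
    """Guess a model's unit system from a steel-like density value alone."""
    hits = [name for name, ref in STEEL_DENSITY.items()
            if math.isclose(density, ref, rel_tol=rel_tol)]
    return hits or ["no match -- stop and check the model by hand"]

print(guess_unit_system(7.85e-9))  # -> tonne-mm-s
print(guess_unit_system(7.85e-3))  # -> g-mm-ms
print(guess_unit_system(0.284))    # -> no match: lbm/in^3 is not a consistent system
```

A loose twenty-five percent tolerance is deliberate: the fingerprints sit factors of a thousand or more apart, so even a rough match is unambiguous.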
Second: check the mass. After defining materials and meshing geometry, calculate total model mass and compare it to the physical part. If your fifty-millimeter steel cube should weigh about one kilogram and the model says a thousand kilograms, you have a unit problem. Thirty seconds. Catches almost everything. NASA's own post-failure review of the Mars Climate Orbiter recommended exactly this kind of independent verification --- a simple sanity check that no one performed for nine months.
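Here is that thirty-second check as a sketch, using the fifty-millimeter cube from this paragraph --- the specific wrong density is an illustrative assumption, the pasted-in gram-millimeter-millisecond value:

```python
# The fifty-millimeter steel cube described above, modeled in tonne-mm-s.
side_mm = 50.0
volume_mm3 = side_mm ** 3                        # 125,000 mm^3

density_correct = 7.85e-9                        # tonne/mm^3 -- the right fingerprint
density_pasted  = 7.85e-3                        # g/mm^3 value dropped into a tonne-mm-s model

mass_correct_kg = density_correct * volume_mm3 * 1000.0   # tonnes -> kg: about 0.98 kg
mass_pasted_kg  = density_pasted  * volume_mm3 * 1000.0   # about 981,000 kg -- a million times too heavy

print(f"expected mass: {mass_correct_kg:.2f} kg")
print(f"model mass:    {mass_pasted_kg:,.0f} kg")
```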
Third: check the drop height. If you have an initial velocity, convert it back to a drop height using h equals v-squared over two-g. A one-meter drop in tonne-millimeter-second gives a velocity of about forty-four hundred millimeters per second. If that number doesn't match the physical test, something is wrong. This is your second sanity check.
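And the second sanity check, sketched the same way --- gravity in millimeters per second squared, the velocity from the physical one-meter drop:

```python
# Back out the drop height implied by an initial velocity, h = v^2 / (2 g),
# in tonne-mm-s units: lengths in mm, velocities in mm/s, g = 9810 mm/s^2.
G_MM_S2 = 9810.0

def implied_drop_height_mm(velocity_mm_per_s):
    return velocity_mm_per_s ** 2 / (2.0 * G_MM_S2)

print(implied_drop_height_mm(4400.0))  # ~987 mm  -- consistent with the 1 m physical drop
print(implied_drop_height_mm(4.4))     # ~0.001 mm -- a mm/ms value read as mm/s: obviously wrong
```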
Fourth: treat vendor models as unverified until you've checked the density. Not the modulus --- the density. Because the modulus can be identical across unit systems that have wildly different densities. The modulus lies to your prediction machine. The density tells the truth.
Fifth: use tools. We built automatic unit detection into our model analyzer for exactly this reason --- because human brains aren't reliable at this task. The tool reads material properties, checks density magnitudes, checks modulus magnitudes, matches against a library of twenty-three common engineering materials, and tells you what unit system the model is using. When it detects mixed signals, it warns you. It doesn't suffer from inattentional blindness. It doesn't have a prediction machine that decides the density looks close enough.
Chapter ten, The Energy Budget of Understanding
Let me close with something from the neuroscience that I find genuinely hopeful.
Sharna Jamadar at Monash University and her colleagues have studied the metabolic cost of cognition --- how much extra energy the brain actually uses when you're doing focused, effortful thinking versus resting. The answer is surprising: only about five percent more. The infrastructure is already running. Ninety-five percent of your brain's energy budget goes to the baseline cost of keeping ninety billion neurons alive and ready to think. The actual cognitive work --- the deliberate, focused, System Two thinking that Daniel Kahneman describes --- is a marginal addition.
What this means is that the metabolic cost of pausing to check your units is trivial. Almost nothing. Your brain resists doing it not because it's expensive, but because interrupting the prediction machine takes conscious effort. The ancient circuitry says: "I've already classified that as background. Moving on." Overriding that classification is what costs you --- not energy, but will.
And this connects back to education in a profound way. When we help students build intuition deep enough that units become inseparable from meaning, we're not asking them to spend extra willpower on every calculation for the rest of their careers. We're actually rewiring their prediction machine. We're upgrading the model that the brain uses to decide what deserves attention. Once the prediction model includes units as part of "what this number means," the units stop being peripheral. They become part of the foveal view. The brain doesn't have to fight to see them anymore. They're just there.
That's the real goal. Not discipline. Not checklists --- although checklists help. The real goal is an internal model so complete that the gorilla can't pass through the room without being seen. A model where seven point eight five E minus nine doesn't just register as "a small number" --- it registers as "that's the density of steel in tonne-millimeter-second, so my stresses will be in megapascals and my forces will be in newtons." When you hear that in your own head without effort, you've arrived.
Chapter eleven, The Holistic View
I want to close with something broader.
We live in an era where simulation software is increasingly powerful and increasingly accessible. An engineer today can mesh a complex geometry, apply loading, and generate results in a fraction of the time it took twenty years ago. The tools have gotten better. But the tools have also made it easier to skip the foundations. And here's the parallel to how our brains work: the more automated the process becomes, the more our cognitive system treats it as "handled." The prediction machine says: "The software is sophisticated, the mesh looks good, the contour plot is smooth --- everything is probably fine."
But the Mars Climate Orbiter's navigation software was sophisticated too. The Hubble mirror was polished by the most advanced fabrication process on the planet. The Gimli Glider's 767 was a state-of-the-art aircraft. Sophistication doesn't prevent foundational errors. It can actually mask them, by producing outputs that look professional and credible regardless of whether the inputs were right.
In sixteen twenty-eight, the Swedish warship Vasa was the pride of the fleet --- sixty-four guns, ornate carvings, the most ambitious warship ever built. It sank in Stockholm harbor on its maiden voyage, less than a mile from shore. Modern analysis has shown that the two teams working on opposite sides of the hull used different measurement systems. One used Swedish feet. The other used Amsterdam feet. The asymmetry contributed to a ship that was fatally top-heavy. Four centuries later, we're still making the same kind of error.
This is why I believe the holistic approach matters. Units aren't a separate topic from materials, which aren't a separate topic from element types, which aren't a separate topic from boundary conditions. They're all one system. And the unit system is the base pair that all of it rests on. Get the units wrong, and every material property is fiction, every stiffness matrix is wrong, every result is untrustworthy. Get them right, and you've earned the foundation on which everything else can be built.
The next time you open a model --- especially one you didn't build --- I want you to think about your prediction machine. Think about what your brain is automatically classifying as background. Think about the gorilla. And then spend five seconds --- five seconds that cost your brain almost nothing in actual energy --- looking at the density value with your fovea, not your periphery.
Those five seconds might be the most important part of your entire analysis. Or they might prevent the next quiz from losing five points. Or they might be the difference between a product that works and one that fails in the field. Or --- if you're working at the right altitude --- they might save three hundred twenty-seven million dollars.
The mechanism is the same. The awareness is the cure. And awareness isn't about being pedantic. It's about understanding deeply enough that the invisible becomes visible.
All thoughts and ideas are my own, formatted and expanded with Claude AI --- not to be told what to write, but to debate and build upon the work.
This has been Joe McFadden. McFaddenCAE.com. Thank you for listening.
ABOUT THE AUTHOR
Joseph McFadden Sr. is an Engineering Fellow at Zebra Technologies leading the MEAS (Mechanical Engineering Analysis & Services) team, and a Professor of Mechanical Engineering at Fairfield University. With over 44 years of experience spanning failure analysis, CAE simulation, materials science, and expert witness work, he was one of three pioneers who brought Moldflow simulation technology to North America. He writes and teaches under the “Holistic Analyst” and “Building Intuition Before Equations” brands, exploring the intersection of engineering simulation, neuroscience, and systems thinking.
This audiobook is part of the FEA Best Practices series. For more content, tools, and the Abaqus INP Analyzer, visit McFaddenCAE.com.