FEA Best Practices
Volume 3: When Things Collide
Joe McFadden
McFaddenCAE.com
FEA Learning Center • Audiobook Script
In Volume 2, we explored the perturbation family -- modal analysis, harmonic response, random vibration, and shock response spectrum. All of them shared one defining characteristic: linearity. The solver worked with a frozen, linearized snapshot of the system. No contact. No yielding. No large deformations. And for small-amplitude vibration problems, that's perfectly valid and extraordinarily efficient.
But the real world isn't always small-amplitude.
Sometimes things collide. Products hit the floor. Cars hit barriers. Explosive bolts fire and stages separate. Structures deform permanently. Materials fracture. Surfaces slam together, slide, separate, and slam together again.
This is the explicit dynamics world. And everything changes.
In the perturbation world, contact was prohibited. In the explicit world, contact is essential -- it's how you capture the physics of things touching. In the perturbation world, materials were linear elastic. In the explicit world, metals yield, plastics crack, rubber stretches to three times its length. In the perturbation world, we assumed small displacements. In the explicit world, a phone case bends, deforms, and permanently reshapes on impact.
This volume covers five topics that together form a complete picture of nonlinear explicit simulation: shock analysis tells you how to simulate sudden, violent events. Contact formulations explain how surfaces interact during those events. Drop test workflow ties everything together in the most common real-world application. Jerk and fragility challenge us to look beyond peak acceleration and ask what really causes damage. And bulk viscosity addresses a specific numerical challenge that arises when shock waves travel through your mesh.
Shock Analysis -- Capturing the Transient
Shock analysis simulates the response of a structure to a sudden impact or rapid acceleration event. Unlike everything in Volume 2, this is a time-domain simulation. It captures the actual transient behavior as the structure responds -- the accelerations, the stresses, the deformations, all evolving in real time, millisecond by millisecond.
Think of dropping your phone onto a hard floor. That's a shock event. We want to know three things: What are the peak accelerations during impact? What stresses develop in the components? And does anything break?
Notice the difference from the perturbation world. In modal analysis, we asked what the natural frequencies are -- a property of the system itself, independent of any loading. In harmonic response, we asked what the steady-state vibration looks like -- the settled behavior after all transients die out. Here, we're asking about the transient itself. The first few milliseconds. The initial impact, the peak forces, the rebound. The transient is the answer.
And because we're capturing real, nonlinear transient behavior, shock analysis uses a general analysis step -- not a perturbation step. Dynamic Explicit or Dynamic Implicit. This means full support for contact, material nonlinearity, geometric nonlinearity, and large deformations. Every restriction from Volume 2 is lifted.
So when do you choose explicit versus implicit?
Explicit is your tool for short-duration, high-velocity events. Drop tests lasting a few milliseconds. Crash simulations. Pyroshock. Ballistic impact. The explicit solver advances time in tiny increments -- automatically calculated from your mesh and material properties -- and never needs to solve a system of equations or iterate for convergence. It just marches forward. Fast, robust, and stable for violent, chaotic events.
Implicit is better for longer-duration, lower-velocity events. Seismic loading lasting tens of seconds. Slow crush tests. Events where the dynamics are important but the time scale is long enough that explicit would require too many tiny time increments to be practical. The implicit solver takes larger steps but must iterate for convergence at each one, which can be challenging when contact states change rapidly.
The loading comes in two flavors.
Direct application: you define an acceleration-versus-time curve and apply it as base excitation. This is common for qualification testing where you have a specified shock pulse -- a half-sine at 100 g's for 11 milliseconds, for example.
Actual impact modeling: you include the impactor or ground surface, define contact between the bodies, and apply an initial velocity. For a drop from height h, that velocity equals the square root of 2 g h. A one-meter drop gives 4.43 meters per second. 1.5 meters gives 5.42. This approach captures the real physics -- the contact force develops naturally from the collision rather than being prescribed -- but requires more modeling effort.
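That square-root relationship is easy to sanity-check in a few lines. A minimal sketch (standard gravity of 9.81 m/s² assumed, air resistance neglected):

```python
import math

def impact_velocity(drop_height_m, g=9.81):
    """Free-fall impact velocity in m/s: v = sqrt(2 * g * h),
    neglecting air resistance."""
    return math.sqrt(2.0 * g * drop_height_m)

print(round(impact_velocity(1.0), 2))   # 1 m drop
print(round(impact_velocity(1.5), 2))   # 1.5 m drop
```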
Damping matters in shock analysis, and here's why. Without damping, your model rings forever after the impact. Every mode that gets excited keeps vibrating indefinitely at its natural frequency. That's not physical -- real structures dissipate energy through material hysteresis, friction at joints, and radiation into surrounding media. Rayleigh damping is the most common approach, combining mass-proportional and stiffness-proportional terms. Typical structures have about 2 to 5 percent of critical damping.
The time period must cover the full event. For a phone drop, that's typically 5 to 10 milliseconds -- but don't cut it short. The product often bounces, and the second impact can be worse than the first. Make sure your simulation runs long enough to capture secondary impacts and rebounds.
And here's the severity scale, just to give you perspective. A one-meter drop test: peak deceleration typically 100 to 200 g's, lasting 1 to 5 milliseconds. Pyroshock from explosive bolt separation: peaks of 1,000 to 10,000 g's, under 1 millisecond, with high-frequency content above 1,000 Hertz. Seismic events at the other extreme: peak accelerations of only 0.1 to 2 g's, but lasting 10 to 30 seconds. The dynamic range across shock applications is enormous.
Post-processing always starts with the energy balance -- and we'll cover that in detail in Volume 4. For now, the rule is: check it first, check it always. If the energy doesn't make sense, nothing else does either.
Now, in shock analysis, parts collide. And when parts collide, you need contact. Let's talk about how that works.
Contact Formulations -- How Surfaces Interact
Contact is one of the most complex aspects of finite element analysis, and choosing the right formulation can mean the difference between a simulation that runs smoothly and one that fails, doesn't converge, or gives you non-physical results.
But first, let me reinforce something that connects back to Volume 2. Contact is only available in general analysis steps -- Dynamic Explicit, Dynamic Implicit, Static General. If you're running a perturbation step -- modal analysis, harmonic response, random vibration, response spectrum -- contact is not supported. This is one of the most common errors when converting a dynamic model for use in frequency analysis. You must replace every contact interaction with tie constraints, multi-point constraints, or coupling constraints. Every single one. Miss one, and you'll get either wrong results or an error.
With that reminder established, let's talk about how contact works in the general steps where it belongs.
Abaqus offers two main approaches: General Contact and Contact Pairs.
General Contact is the newer, simpler approach. You tell the solver to detect and enforce contact between all surfaces automatically. One definition handles the entire model. The solver figures out which surfaces are near each other at every increment and enforces no-penetration. It handles self-contact naturally -- a sheet metal part folding onto itself, a rubber seal compressing into its own surface. It works with both explicit and implicit solvers.
For explicit dynamics -- crash, drop test, impact -- General Contact is almost always the right choice. The chaotic nature of impact events means you often can't predict in advance which parts will touch. A phone drops on a corner and the battery shifts inside the housing, hitting the circuit board, which deflects into the back cover. You didn't plan for that sequence of contacts, but General Contact catches all of them automatically.
Contact Pairs is the traditional approach. You manually define each pair of surfaces that might interact. Surface A contacts Surface B. Surface C contacts Surface D. You must identify every potential interaction yourself.
The advantage is finer control. Different friction coefficients for each pair. Different enforcement algorithms. Different contact properties for steel-on-rubber versus steel-on-steel. For implicit static or quasi-static analyses -- assembly loading, press fits, bolted joint preload -- Contact Pairs often work better because implicit convergence can be sensitive, and precise control over each interface helps the solver find a solution.
Under the hood, two enforcement algorithms dominate.
The penalty method is the default for explicit. Think of it as placing a very stiff spring between surfaces when they try to penetrate. The deeper the penetration, the larger the restoring force. It's fast and robust, but it allows a small amount of penetration -- usually negligible, but worth checking.
The kinematic method is available for Contact Pairs in explicit. Instead of springs, it enforces zero penetration by correcting node positions after each time increment. More exact, but less robust for complex contact scenarios.
For implicit, you choose between penalty -- allowing tiny penetration but aiding convergence -- and direct enforcement -- enforcing zero penetration exactly but potentially causing convergence difficulties. There's a trade-off, and the right choice depends on how sensitive your results are to small penetration versus how robust your convergence needs to be.
Surface definitions matter more than people realize. The general rule: the finer mesh should be the secondary surface, the coarser mesh the primary. If meshes are similar, the stiffer body should be primary. Getting this backwards causes penetration on the coarser side -- the primary surface nodes push through the secondary surface because the coarser mesh can't resolve the contact geometry accurately.
Friction uses a Coulomb model. Dry steel on steel: about 0.3 to 0.5. Lubricated surfaces: 0.05 to 0.15. Rubber on hard surfaces: 0.5 to 1.0 or higher. Using the wrong friction coefficient doesn't just affect sliding -- it affects the entire load path through the assembly. In a drop test, friction between the product and the floor determines whether the product slides, sticks, or rotates on impact. That changes which corner takes the load, which changes where the stresses peak.
Common contact problems and how to address them.
Initial overclosures -- parts overlapping at the start of the analysis -- cause large, artificial forces in the first increment. Always inspect your assembly for initial penetrations before running. Abaqus can adjust small overclosures automatically, but large ones need to be fixed in the geometry or mesh.
Excessive penetration in explicit -- if surfaces are passing through each other, increase the penalty stiffness, refine the secondary mesh, or switch to kinematic enforcement.
Non-convergence in implicit -- reduce the increment size, add contact stabilization, switch to penalty enforcement, or increase friction slightly. Implicit contact is inherently iterative, and the solver needs to determine the contact state -- open, sliding, or sticking -- at every node, every iteration.
Chattering -- nodes oscillating rapidly between contact and separation -- add friction, adjust stabilization, or refine the mesh at the contact interface.
Now we have the analysis framework -- explicit dynamics for transient events -- and the contact mechanics to handle surface interactions. Let's put it all together in the most common real-world application.
Drop Test -- the Complete Workflow
Drop test simulation is one of the most common and most practically important applications of explicit dynamics. If you work in consumer electronics, packaging, medical devices, or any product that gets shipped and handled, you will encounter drop test requirements. And the complete workflow ties together everything we've discussed -- materials from Volume 1, analysis type selection, contact formulations, and post-processing validation.
A drop test simulates a product falling from a specified height onto a hard surface. Standard heights vary by industry -- 1.5 meters for phones, 76 centimeters for laptops, various heights for packaging. The goals are to predict whether anything breaks, identify stress concentrations, and optimize the design before building physical prototypes. That last point is the real value -- each physical prototype costs money and time. Each simulation costs compute cycles. The economics of virtual testing are overwhelming.
Let me walk through the complete workflow, step by step.
Geometry preparation. Start with your CAD model and simplify for analysis. Remove cosmetic features -- logos, textures, tiny fillets that don't affect structural behavior. Keep structural features -- ribs, bosses, snap fits, screw posts. Decide which components to model explicitly and which to represent as lumped masses. A circuit board with dozens of capacitors and resistors might be modeled as a plate with distributed mass rather than meshing every tiny component. The art is knowing what matters structurally and what doesn't.
Material definition. Every material needs density, elastic properties, and usually plasticity. For metals, define the full stress-strain curve including strain hardening -- and remember from Volume 1, that's true stress versus true plastic strain, not engineering values. For plastics, include strain rate dependence if possible. Polymers can be two to three times stronger at impact strain rates than at quasi-static rates. Ignoring this means you're underestimating the material's actual resistance to deformation at impact speeds. For rubber gaskets and seals, use hyperelastic models. And if you want to predict fracture, you need damage and failure criteria.
Meshing. Use C3D10M tets for complex geometry -- they mesh automatically and work well in contact, as we discussed in Volume 1. Use C3D8R hexes where possible for efficiency. Refine the mesh at impact corners and edges where stresses concentrate. And always run a mesh sensitivity check -- the same drop with two mesh densities. If peak stress changes by more than 10 percent, you need a finer mesh at the critical locations.
Setting up the drop. Model the floor as a rigid analytical surface -- it doesn't deform and doesn't need meshing. Apply an initial velocity to the entire product. For a drop from 1 meter, that's 4.43 meters per second. For 1.5 meters, 5.42 meters per second. Orient the model to the desired drop angle -- flat face, corner, or edge. Most test specifications require multiple orientations.
Contact. Set up General Contact between all surfaces -- this catches everything automatically. Define friction coefficients based on the actual material pairs. Typically 0.3 to 0.5 for plastic on a hard surface. Pay special attention to internal contacts -- circuit boards hitting housings, batteries shifting inside enclosures. These internal impacts often cause more damage than the primary floor impact.
Analysis configuration. Use Dynamic Explicit with a time period covering the full impact plus rebounds. For a typical phone drop, 5 to 10 milliseconds. Request field output every 0.1 to 0.5 milliseconds to capture the transient behavior. Request history output for energy quantities, contact forces, and accelerations at critical component locations.
Validation. Check the energy balance first -- always first. Artificial energy should be under 5 percent of the total internal energy. Verify the impact velocity by checking kinetic energy just before contact. Animate the deformation and make sure the physics looks right -- does it look like a real drop, or is something non-physical happening? Check contact to make sure there are no spurious penetrations.
Post-processing for design decisions. Identify peak stress locations and compare against material allowables. Check for permanent deformation -- is the housing cracked? Has a gap opened? Has a snap fit disengaged? Extract acceleration time histories at sensitive component locations and compare against fragility levels if known.
For correlation with physical tests -- and you should always correlate when test data is available -- place virtual accelerometers at the same locations as physical sensors. Peak acceleration should agree within 20 to 30 percent for a well-correlated model. Deformation patterns should match qualitatively -- if the physical test shows a crack at a screw post, your simulation should show high stress there, even if the exact stress magnitude differs.
A few advanced techniques worth knowing. A two-step approach -- running a short free-fall step first to gently establish contact, then continuing with the full dynamic response -- avoids initial penetration issues. For multiple drop orientations, automate by rotating the initial velocity vector and floor normal rather than rebuilding the model. And for products with internal mechanisms -- folding hinges, sliding trays, spring-loaded components -- model those mechanisms explicitly, because they often dominate the failure modes.
But before we move on to that final numerical topic -- bulk viscosity -- there is something important about how we interpret drop test results that deserves its own discussion.
Jerk and Fragility -- Beyond Peak G
When engineers evaluate drop test results -- whether from simulation or physical testing -- the first number everyone looks at is peak acceleration. How many g's did it hit? 1500? 2000? 3000? And for decades, that peak g number has been the primary metric for characterizing shock severity.
But peak acceleration alone can be deeply misleading. And if you design to peak g without understanding what really drives damage, you can end up with a product that passes every qualification test and still fails in the field.
Consider two shock pulses from drops of the same device at the same height. Pulse A shows 3000 g peak lasting 0.8 milliseconds. Pulse B shows 1500 g peak lasting 3 milliseconds. Which is more severe? Most engineers instinctively pick Pulse A -- higher g, more force. But Pulse B might cause far more damage, because its longer duration means its energy content is concentrated at lower frequencies that happen to align with the circuit board bending modes. The board flexes more, the solder joints strain more, and components fail -- all at half the peak g level.
This is why fragility assessment requires multiple parameters working together. Peak acceleration tells you the instantaneous force. Velocity change tells you the total energy that must be absorbed. The shock response spectrum -- which we covered in Volume 2 -- tells you how that energy is distributed across frequency. And there is a fourth parameter that most engineers overlook entirely: jerk.
Jerk is the rate of change of acceleration -- the third derivative of position with respect to time. If acceleration tells you the force, jerk tells you how suddenly that force arrives.
When a device hits a hard surface, the impact zone experiences near-instantaneous deceleration. But that deceleration does not spread instantly through the structure. It propagates as a stress wave, traveling at the speed of sound in the material. The rate at which that stress wave front develops -- its sharpness -- is governed by jerk. High jerk means a sharp, fast-rising stress wave. Low jerk means a gradual, spread-out wave front.
This matters enormously for brittle materials and rigid interfaces. Glass displays, ceramic capacitors, rigid adhesive bonds -- these do not have time to deform and distribute load when the stress wave arrives with a sharp front. They see an essentially instantaneous load, and they crack. The same total energy delivered with a lower jerk -- a softer rise time -- gives these materials time to engage their full cross-section in resistance, and they survive.
In practice, certain failure modes correlate more strongly with jerk than with peak acceleration or even SRS. Crack initiation in display glass, solder mask delamination, and fracture at rigid adhesive interfaces -- these are often jerk-dominated failures. A corner drop onto steel from a moderate height produces lower peak g but dramatically higher jerk than a face drop from a greater height. That is why devices sometimes survive severe face drops and then fail on seemingly mild corner drops.
So how do you get jerk data from your simulation?
In your explicit dynamic analysis, you already have acceleration time histories at every node. Jerk is simply the time derivative of that acceleration signal. In post-processing, you differentiate your acceleration history output with respect to time.
But -- and this is important -- numerical differentiation amplifies noise. A clean acceleration signal at 50 kilohertz sampling rate can produce a noisy jerk signal if you differentiate it raw. You need to filter first. Apply a low-pass filter appropriate to your frequency range of interest before differentiating, or use a smooth finite-difference scheme. The critical jerk metrics are peak magnitude -- which tells you the maximum rate at which stresses develop -- and jerk duration -- which tells you how long that rapid loading persists.
I have developed digital signal processing tools specifically for this kind of work -- extracting SRS, computing jerk, evaluating velocity change, and performing multi-parameter fragility assessment from both test data and simulation output. These are available at McFaddenCAE.com, alongside model evaluation tools that check whether your FEA models follow the best practices we discuss throughout this series -- things like energy balance checks, hourglass ratios, mass scaling limits, element quality, and unit system consistency. If you are building simulation models for drop test or shock analysis, those tools can help you catch common mistakes before you waste a long compute run on a model that has fundamental issues.
The full treatment of fragility assessment -- including case studies where peak g misled engineering teams, the role of pseudo-velocity SRS, cumulative fatigue damage from repeated drops, and a complete multi-parameter framework for characterizing device vulnerability -- is available as a separate in-depth discussion through the FEA Learning Center at McFaddenCAE.com. I encourage you to check that out if you work with drop-tested products. What we have covered here is the essential idea: peak g alone is not enough, jerk is the forgotten parameter, and real fragility assessment requires synthesizing multiple metrics into a coherent picture of what actually breaks and why.
Now, in all of this explicit dynamics work -- shock analysis, contact interactions, drop tests, jerk and fragility evaluation -- there is a numerical challenge that arises specifically when shock waves propagate through your mesh. That is our final topic.
Bulk Viscosity -- Taming Shock Waves
Bulk viscosity is an artificial damping mechanism that the explicit solver uses to handle shock waves. It's a numerical tool, not a physical property -- and understanding what it does gives you an important troubleshooting capability for any explicit analysis involving impact or blast loading.
Here's the problem it solves. When a high-speed impact creates a sharp pressure discontinuity -- a shock front -- the finite element mesh can't resolve it perfectly. A shock front in reality is essentially a step function in pressure: one side is at ambient, the other side is at enormously elevated pressure, and the transition happens over a distance comparable to the molecular mean free path -- effectively zero thickness. Your mesh elements have finite size. They can't represent a zero-thickness discontinuity.
Without any smoothing, you get numerical oscillations behind the shock front -- ringing that looks like noise in your stress and pressure results. These oscillations are purely numerical artifacts. They don't represent physical behavior, and they can contaminate your results and even cause elements to collapse.
Bulk viscosity smooths the shock front over a few elements, eliminating these oscillations while preserving the correct shock jump conditions -- the correct relationship between pressure, density, and energy across the shock. It's like applying a controlled blur to a sharp digital edge to remove aliasing artifacts while keeping the essential information.
Abaqus uses two types.
Linear bulk viscosity, parameter b1, damps the oscillations that trail behind the shock front. The default value is 0.06, and it rarely needs adjustment. Think of it as cleaning up the ringing behind the wave.
Quadratic bulk viscosity, parameter b2, is the main shock-capturing mechanism. It prevents elements from collapsing by adding pressure proportional to the square of the compression rate. The default value is 1.2. This is what smears the shock front over a few element lengths -- typically 3 to 5 elements -- making it resolvable by the mesh.
For most analyses -- standard drop tests, typical impact events, moderate-speed crashes -- the defaults work well and you don't need to touch them. But there are specific situations where adjustment helps.
Increase b2 to 1.5 or 2.0 if you see ringing in pressure plots behind a shock front, or if elements are collapsing during high-velocity impact. More bulk viscosity means more smoothing, which stabilizes the solution but spreads the shock front over more elements.
Decrease b1 and b2 for quasi-static explicit analyses -- metal forming, stamping, slow crush simulations -- where you're using explicit dynamics for convenience but the event is slow enough that shock waves aren't really present. In those cases, the default bulk viscosity adds artificial pressure that contaminates your stress results. Reduce b1 to 0.03 and b2 to 0.6 or lower.
The diagnostic: check the viscous dissipation energy -- ALLVD -- in your energy output. If it's large compared to the total internal energy, bulk viscosity is significantly affecting your solution. For a typical drop test, ALLVD should be a small fraction of the total energy. If it's not, either your shock is very severe and needs the viscosity, or the values are too high for your application and need to be reduced.
One important mesh consideration: bulk viscosity smears the shock over about 3 to 5 elements. If your mesh is too coarse, the smeared shock front is physically too wide and you lose accuracy in the shock propagation. A good rule is 5 to 10 elements through the region where the shock wave passes. This is one more reason why mesh density at impact zones matters -- not just for stress resolution, but for accurate shock propagation.
Crossing the Boundary
Let me step back and look at where we are in the overall picture.
In Volume 1, we built the foundation -- units, materials, elements, mesh convergence. The DNA of the model.
In Volume 2, we explored the perturbation family -- modal analysis, harmonic response, random vibration, SRS. All linear. All elegant. All efficient. And all subject to the fundamental constraint: no contact, no material nonlinearity, no large deformations.
In this volume, we crossed that boundary. Shock analysis introduced the explicit time-domain world where transient behavior is the answer, not a nuisance. Contact formulations gave us the mechanics of surface interaction -- General Contact for the chaotic explicit world, Contact Pairs for fine control in implicit. Drop test workflow tied everything together into the most common applied scenario, from geometry simplification through correlation with physical tests. Jerk and fragility challenged the assumption that peak g tells the whole story -- real damage assessment requires multiple parameters, and the tools to extract and evaluate them are available at McFaddenCAE.com. And bulk viscosity addressed the specific numerical challenge of shock wave propagation.
But throughout all of this -- explicit dynamics, contact, material yielding, large deformations -- how do you know your results are trustworthy? How do you know the solver isn't adding artificial energy, or that numerical artifacts aren't corrupting your answer?
That's Volume 4. The quality assurance volume. Energy balance tells you whether the physics is conserved. Hourglassing reveals a silent numerical pathology that can invalidate results from reduced-integration elements. And mass scaling -- the most powerful and most dangerous shortcut in explicit dynamics -- can save enormous computation time or completely destroy your answer, depending on whether you respect its constraints.
Volume 4 is about trust. Building it, verifying it, and knowing when your results deserve it.