
ABAQUS INP COMPREHENSIVE ANALYZER

Under the Hood  —  A Deep-Dive Series

 

PART 6

Part Relationships, Penetration Checks,

and the Nearest-Parts Search

Spatial Intelligence — How Parts Relate in Space and Structure

 

Joseph P. McFadden Sr.

McFaddenCAE.com  |  The Holistic Analyst

 

© 2026 Joseph P. McFadden Sr. All rights reserved.


 

Setup — Why Assembly Relationships Matter

An assembly model is not a collection of independent parts sitting in the same file. It is a system — parts that touch, constrain, load, and influence each other. The structural behavior of any one component is inseparable from the constraints its neighbors impose on it.

The first five parts of this series described how the program reads the model, processes geometry, analyzes materials, exports sub-assemblies, and educates the analyst. All of that work treats parts as individual entities. Part Six is about the connections between them.

 

Three analysis tools in the Parts tab address those connections directly. The nearest-parts search tells you which parts are geometrically closest to the one you are looking at, and how far away they are. The interacting-parts search tells you which parts are explicitly connected by constraint definitions in the file. The penetration check tells you whether any parts are actually overlapping — occupying the same space — which would corrupt any simulation that runs on the model.

Together, these three tools give you a spatial and topological picture of the assembly that no amount of looking at a node table or element list can provide.


 

Section 1 — The Relationship Graph: What Was Built During Processing

In Part One of this series, we described the build_part_relationships function that runs at the end of model processing. That function produced a relationship graph — a data structure recording what the program could determine about how parts connect.

 

The graph has two inputs. First, the interface nodes dictionary: which node IDs are shared between which pairs of parts. Two parts that share nodes at their boundary are physically bonded at those nodes — they move together. The number of shared nodes is a proxy for the contact area between parts. Second, the interactions dictionary: the parsed record of every TIE constraint, contact pair, coupling constraint, and MPC definition in the file.

The graph structure is a nested dictionary. For each part name, there is a dictionary of related parts, and for each related part, a record of how they relate: whether they share nodes, how many, and whether they appear together in any interaction definition.
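As a concrete illustration, the nested structure might look like the following sketch. The part names, field names, and counts here are invented for illustration; the program's actual schema may differ.

```python
# Hypothetical sketch of the relationship graph's nested-dictionary shape.
# Part names, field names, and counts are illustrative, not the program's schema.
relationships = {
    'Housing': {
        'PCB':     {'shared_nodes': 142, 'has_interaction': True},
        'Bracket': {'shared_nodes': 0,   'has_interaction': True},
    },
    'PCB': {
        'Housing': {'shared_nodes': 142, 'has_interaction': True},
    },
}

def related_parts(graph, part):
    """Return the names of all parts recorded as related to `part`."""
    return sorted(graph.get(part, {}))
```

A query such as related_parts(relationships, 'Housing') returns every recorded neighbor without touching geometry, which is what makes the stored graph the fast path.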

 

This graph is built once during processing and stored on the application object. The three relationship tools in the Parts tab query it, supplement it with live geometry calculations, and display the results. The stored graph is the fast path for known structural relationships. The live geometry calculations — nearest-parts distance and penetration check — go beyond what the graph can provide.


 

Section 2 — The Nearest-Parts Search: Count Mode versus Distance Mode

The nearest-parts search answers the question: given a selected part, which other parts are geometrically closest to it, and how far are they?

 

This is a purely geometric query — it does not depend on what the file says about which parts are connected. It computes distances from the actual node coordinates. Two parts with no constraint definition between them can still be physically adjacent. The nearest-parts search finds that adjacency whether or not the file records it formally.

The search has two modes, selectable with radio buttons in the Parts tab. Count mode returns the N closest parts, where N is a number you specify. Distance mode returns all parts within a specified distance threshold, measured in whatever unit system the model uses.

Count Mode

Count mode is appropriate when you want a ranked list regardless of how spread out the assembly is. Asking for the ten nearest parts gives you the ten closest parts by minimum surface-to-surface distance, regardless of whether the nearest one is 0.1 mm away or 50 mm away.

This mode is useful for assembly review: you want to see what surrounds a component of interest, who its neighbors are, and in what order of proximity they fall. The output is a ranked table with part names and distances, showing a touching indicator — distance approximately zero — for parts that share a surface.

Distance Mode

Distance mode is appropriate when you have a physical threshold in mind — a clearance tolerance, an interference fit specification, a minimum gap requirement. Asking for all parts within 0.5 mm returns only those parts that violate a 0.5-mm clearance around the selected part.

When distance mode returns zero results, the output message tells you what the nearest part actually is and how far it is. This prevents the frustrating experience of getting an empty result with no guidance on how to interpret the absence.

 

The results panel also stores the found part list internally so the action buttons activate: Select Results in Parts List loads all found parts into the multi-selection in the Parts tab listbox, and Select and View in 3D immediately renders the selected part plus all its neighbors in the 3D viewer as a color-coded assembly view. This connect-results-to-action pattern means a nearest-parts query is not just informational — it is the first step of a multi-part visualization or export workflow.


 

Section 3 — The Geometry of Distance: Surface Nodes and the K-D Tree

Computing the minimum distance between two parts sounds straightforward, but the implementation choices matter enormously for performance.

Surface Nodes, Not All Nodes

The first decision is which nodes to use for the distance calculation. A solid part with one hundred thousand elements has many interior nodes — nodes that are buried in the material and have no spatial relationship to the part's exterior surface. Computing distances from interior nodes to another part's interior nodes produces meaningless results: the interior of one part cannot physically interact with the interior of another unless they are actually overlapping.

The correct approach is to compute distances using only the surface nodes of each part — those nodes that lie on the exterior faces identified by the face counting algorithm from Part Two.

The find_surface_nodes function retrieves the exterior node set for a part. For parts where the face extraction has already been run — if you have previously viewed the part in 3D — the surface nodes may already be available. For parts not yet tessellated, the function computes the exterior faces on demand. The result is a set of node IDs that represent the part's outer boundary.

The Brute-Force Problem

Consider the naive approach. Take every surface node of part A — call that set M nodes. Take every surface node of part B — call that set N nodes. For every node in A, compute its distance to every node in B. The closest distance is the part-to-part distance.

The number of distance calculations is M × N. If both parts have ten thousand surface nodes, that is one hundred million calculations for a single part pair. For a ten-part assembly, with 45 unique pairs to check, that climbs into the billions. In Python, each of those is a floating-point square root operation. This approach is technically correct but practically unusable for any real assembly.

The K-D Tree Solution

The program uses a K-D tree — provided by the cKDTree class from scipy.spatial — to make this tractable. A K-D tree is a binary space partitioning structure that organizes points in k-dimensional space — three dimensions, in our case — such that nearest-neighbor queries can be answered in logarithmic time rather than linear time.

The tree is built from the node coordinates of part B. This construction takes O(N log N) time — where N is the number of nodes in B — and requires a one-time upfront cost. Once the tree is built, querying the nearest neighbor of any arbitrary point in three-dimensional space takes O(log N) time per query.

 

The key optimization is the batch query. Instead of querying one node from part A at a time, the function passes all M nodes from part A as a two-dimensional NumPy array in a single call to tree.query. The K-D tree processes all M queries simultaneously — or as close to simultaneously as the vectorized NumPy implementation allows — and returns an array of M minimum distances in one operation.

The result is then filtered: any distance below the tolerance threshold is flagged as a close node. The entire nearest-neighbor computation for one part pair — what would have been M × N scalar operations — is reduced to one vectorized batch query. On large assemblies with thousands of surface nodes per part, this is the difference between seconds and hours.
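A minimal sketch of this batch pattern, assuming scipy is available; the function name, array shapes, and tolerance value are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def min_part_distance(coords_a, coords_b, tolerance=0.1):
    """Minimum node-to-node distance between two surface-node clouds.

    coords_a: (M, 3) array of part A's surface-node coordinates.
    coords_b: (N, 3) array of part B's surface-node coordinates.
    Returns (min_distance, indices of part-A nodes within tolerance).
    """
    tree = cKDTree(coords_b)                # O(N log N) build, paid once
    dists, _ = tree.query(coords_a)         # one vectorized batch: M queries
    close = np.where(dists < tolerance)[0]  # flag part-A nodes near part B
    return float(dists.min()), close
```

The single tree.query call on the full (M, 3) array is the optimization described above: M nearest-neighbor lookups collapse into one vectorized operation.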

 

When scipy is not installed, the program falls back to a brute-force implementation capped at the first one hundred surface nodes of part A for sampling. The fallback produces an approximate result — the sample may not include the closest actual node — and logs a warning to the console. The result is still directionally useful, but the analyst is told explicitly that a slower, approximate method was used.
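The capped fallback path can be sketched as follows; the function name is illustrative, and the cap of one hundred sampled nodes matches the behavior described above:

```python
def approx_min_distance(coords_a, coords_b, cap=100):
    """Brute-force fallback when scipy is unavailable.

    Samples at most `cap` nodes from part A, so the true closest node
    may be missed; the result is approximate and directionally useful only.
    """
    best = float('inf')
    for ax, ay, az in coords_a[:cap]:           # capped sample of part A
        for bx, by, bz in coords_b:             # every node of part B
            d = ((ax - bx)**2 + (ay - by)**2 + (az - bz)**2) ** 0.5
            if d < best:
                best = d
    return best
```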


 

Section 4 — The Interacting-Parts Search: Reading the File's Explicit Connections

The nearest-parts search is a geometric query. The interacting-parts search is a structural query. It asks: what does the file actually say about how this part connects to others?

 

Three types of Abaqus interaction definitions are scanned: TIE constraints, contact pairs, and coupling constraints.

A TIE constraint bonds two surfaces together so that matching nodes on the surfaces move identically. The master and slave surface names are the identifiers in the file. The program checks whether the selected part's name appears as a substring in either surface name. A surface named Housing-1_TOP is likely the top surface of the Housing part. A surface named PCB-1_BOTTOM connects to the PCB.

Contact pairs define a potentially sliding or separating interaction between two surfaces. The same name-matching logic applies. The result shows the interaction type — TIE or CONTACT — the constraint name, and the connected surface name.

Coupling constraints connect a surface to a reference node, typically the center of a bolt hole or a connector attachment point. When the selected part's surface name appears in a coupling definition, the coupling's reference node is reported as the connected entity.

 

The name-matching approach is a practical necessity. Abaqus surface names are free-form strings — the file's author assigns them. There is no enforced convention linking a surface name to the part it belongs to. The program uses substring matching with case insensitivity as the best available heuristic: if the part name appears anywhere in the surface name, the interaction is considered relevant.
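The heuristic reduces to a case-insensitive substring test; a sketch with invented surface names:

```python
def surfaces_mentioning(part_name, surface_names):
    """Surfaces whose free-form names contain the part name,
    compared case-insensitively (a heuristic, not a guarantee)."""
    needle = part_name.lower()
    return [s for s in surface_names if needle in s.lower()]

surfaces = ['Housing-1_TOP', 'PCB-1_BOTTOM', 'SURF_001']
```

Searching for 'housing' finds 'Housing-1_TOP' despite the case difference, while 'SURF_001' is never matched, which is exactly the limitation discussed below.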

This will miss connections where the surface name bears no resemblance to the part name — a surface named SURF_001 belonging to the Housing part will not be found when searching for Housing interactions. The program acknowledges this limitation in the results display: when no interactions are found, the output tells you how many TIE, contact, and coupling definitions were scanned, so you know whether the absence of results reflects a genuinely unconnected part or an unresolvable naming convention.

 

The results panel also queries the stored relationship graph for any shared-node relationships that were built during processing. If two parts share interface nodes, that is reported here as a SHARED type interaction with the node count. This catches the bonded-mesh case — parts whose mesh nodes are literally the same nodes — even when no constraint keyword appears in the file.


 

Section 5 — The Common-Node Search: Set Intersection as Connectivity Proof

The common-node search is the most direct of the three relationship tools. It asks the simplest possible question: do any node IDs appear in both this part and that part?

 

In a well-meshed assembly, when two components are bonded — glued, welded, perfectly constrained — the mesher typically creates a conformal mesh at the interface: the two parts share the exact same nodes at their common surface. The node at the corner of the bond interface has the same ID in both parts' element connectivity tables.

The common-node search exploits this directly. It collects the full set of node IDs for the selected part — from all elements, not just the surface — and does the same for every other part. A Python set intersection — the & operator — returns the nodes present in both sets. If that intersection is non-empty, the two parts share nodes and are therefore directly meshed together.
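The core of the search is a single set intersection; a sketch with invented node IDs (the percentage calculation mirrors the results display described below):

```python
def common_nodes(part_a_nodes, part_b_nodes):
    """Node IDs present in both parts: evidence of a conformal bond.

    Returns the sorted shared IDs and the percentage of part A's
    nodes that are shared.
    """
    set_a = set(part_a_nodes)
    shared = set_a & set(part_b_nodes)      # Python set intersection
    pct = 100.0 * len(shared) / len(set_a) if set_a else 0.0
    return sorted(shared), pct
```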

 

The results are sorted by the number of common nodes in descending order, so the most strongly connected parts appear first. Each result shows the part name, the common node count, and the percentage of the selected part's total nodes that are shared — which gives a sense of what fraction of the part's boundary is bonded to each neighbor.

The top match also shows a sample of the first ten shared node IDs. This is useful for debugging: if you want to verify that the shared nodes are at the expected location in the model, you can look up those node IDs in the summary data and verify their coordinates.

 

The common-node search is most powerful for conformal meshes and orphan mesh assemblies where parts were meshed together with shared nodes at interfaces. For assemblies where parts were meshed independently and connected with TIE constraints or contact pairs — where the nodes at the interface belong to one part or the other but not both — the common-node search will correctly report no shared nodes. In that case, the interacting-parts search is the right tool.

Knowing which tool to use for which model type is part of the critical thinking framework this series is building. The tools are not interchangeable — each one addresses a specific connectivity model.


 

Section 6 — The Penetration Check: When Parts Occupy the Same Space

The penetration check is the most computationally intensive operation in the entire program. It addresses a failure mode that no amount of mesh quality checking or material verification can catch: two or more parts that physically overlap in space.

 

A geometric penetration means that nodes from one part are positioned inside the volume of another part. In physical reality, two solid bodies cannot occupy the same space. In a simulation model, they can — the finite element solver has no inherent protection against this. The elements of the overlapping region will be assigned conflicting material stiffnesses, contact algorithms will behave erratically, and the simulation will produce results that bear no relationship to physical reality.

Penetrations occur for several reasons. The most common is imprecise positioning during model assembly: two parts were placed with a nominal clearance of zero, but numerical rounding in the translation or import process moved one slightly into the other. Another common cause is user error in defining assembly transformations. A third is deliberate interference fits that were meant to be represented with contact but were accidentally set up as tied surfaces instead.

 

The penetration check detects these overlaps by finding nodes from one part that are within a tolerance distance of nodes from another part. The tolerance is set at 0.1 model units by default — this is not zero, because zero would miss near-tangent surfaces that are so close they might as well be touching, and it would miss meshing artifacts where nodes are at nominally identical positions but differ by floating-point precision.


 

Section 7 — The Penetration Check Architecture: Threading, Queues, and Heartbeats

The penetration check is architected as a fully threaded, cancellable, progress-reporting operation. This is the most sophisticated UI-concurrency implementation in the program.

Why Threading Is Non-Optional Here

For a ten-part assembly, the check requires 45 pair comparisons — the combinatorial count of choosing two items from ten. For a twenty-part assembly, 190 pairs. For a fifty-part assembly, 1,225 pairs. Each pair comparison involves building a K-D tree and running a batch nearest-neighbor query. On a large assembly with high node counts, this can take minutes.

If this computation ran on the main thread — the same thread that draws the interface — the entire window would freeze for the duration. The mouse cursor would stop responding. The operating system would eventually mark the application as unresponsive. The user would have no way to cancel.

The solution is to move the computation to a background daemon thread. The main thread stays free to redraw the interface, respond to mouse events, and — critically — receive progress updates and handle the Cancel button.

The Queue-Based Communication Pattern

The background thread needs to communicate with the main thread. It cannot update the UI directly — tkinter is not thread-safe. Attempting to call any tkinter widget method from a background thread can produce crashes or rendering corruption.

The program uses a Python queue as the communication channel. The background thread puts messages into the queue. The main thread reads from the queue on a scheduled timer — every one hundred milliseconds — and dispatches any pending messages to the appropriate UI update calls.

Three message types flow through the queue. A progress message carries the current pair number, total pair count, the names of the two parts being compared, and a time-remaining estimate. The main thread uses this to update the progress bar value and the status label text. A done message carries the complete penetration results. The main thread closes the progress dialog and opens the results dialog. An error message carries the exception string for any unexpected failure in the worker.
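The producer side of this pattern can be sketched without tkinter; in the real application the main thread drains the queue from a callback scheduled every hundred milliseconds rather than the blocking join used here, and the message field names are illustrative:

```python
import queue
import threading

def worker(q, pairs):
    # Background thread: never touches the UI, only puts messages.
    for i, (a, b) in enumerate(pairs, start=1):
        # ... the expensive pair comparison would run here ...
        q.put({'type': 'progress', 'pair': i, 'total': len(pairs),
               'parts': (a, b)})
    q.put({'type': 'done', 'results': []})

q = queue.Queue()
t = threading.Thread(target=worker, args=(q, [('A', 'B'), ('A', 'C')]),
                     daemon=True)
t.start()
t.join()

# Main thread: drain all pending messages without blocking.
messages = []
while True:
    try:
        messages.append(q.get_nowait())
    except queue.Empty:
        break
```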

The Heartbeat Timer

There is a second timer running alongside the progress polling. The heartbeat timer fires every one second and updates a small gray label at the bottom of the progress dialog: elapsed seconds and pairs checked so far. This is deliberately separate from the main progress updates.

The reason is reliability. The progress queue updates come from the worker thread and depend on the worker producing updates at a sufficient rate. For very large pair comparisons — a single pair with millions of nodes on each side — the worker might be silent for a full second while running one batch K-D tree query. During that silence, the progress label would freeze, and the user would not know if the program was working or stuck.

The heartbeat has no dependency on the worker thread. It runs entirely in the main thread via tkinter's after method — the same interrupt-driven scheduling used for the processing progress dialog in Part One. The elapsed counter increments every second regardless of what the worker is doing. As long as the heartbeat ticks, the application is alive.

This is the interrupt-driven philosophy applied to user feedback. Do not wait for the worker to signal; instead, provide an independent heartbeat that cannot be blocked by worker slowdowns.


 

Section 8 — The Pre-Built Element Dictionary: O(1) Lookups

Before the penetration check worker thread starts, the main thread synchronously calls build_element_dict, which builds a dictionary keyed by element ID, mapping each ID to its element record. The pre-build happens entirely on the main thread, before the worker begins.

 

Why does this matter? Because the worker thread needs to find elements by ID many thousands of times during the check. Without the pre-built dictionary, each lookup would require scanning the entire elements list — which might have tens of thousands of entries — until it finds the one with the matching ID. That is an O(N) operation per lookup.

With the pre-built dictionary, each lookup is a Python dictionary key access — O(1), constant time regardless of how many elements are in the model. The difference between O(1) and O(N) matters enormously when you are doing thousands of lookups inside nested loops.

 

The dictionary is built once and stored on the application object. It is cleared when a new file is loaded. If it already exists when build_element_dict is called, the function returns immediately without rebuilding — there is a guard check at the top. This means the first call to any function that needs element lookups pays the build cost; subsequent calls reuse the cached result.
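The guard-then-build pattern can be sketched as follows; the attribute names and record shape are illustrative:

```python
class App:
    def __init__(self, elements):
        self.elements = elements           # list of (elem_id, node_ids)
        self.element_dict = None           # built lazily, on first demand

    def build_element_dict(self):
        if self.element_dict is not None:  # guard: reuse the cached result
            return self.element_dict
        # One O(N) pass; every later lookup is an O(1) key access.
        self.element_dict = {eid: nodes for eid, nodes in self.elements}
        return self.element_dict
```

The first caller pays the O(N) build cost; every subsequent caller gets the same dictionary back immediately.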

This lazy caching pattern — build on first demand, reuse on subsequent calls — appears throughout the program for expensive one-time computations. It avoids the cost on startup while ensuring the result is always available when needed.

 

The element dictionary uses the same lookup that the node collection function uses: for each element ID in the part's element list, retrieve the element record from the dictionary, then collect the node IDs from the connectivity field. This is the O(1) path described in the code comments. The alternative — scanning the elements list for each ID — would make the node collection O(N) per element, producing O(N²) total for a part with many elements.


 

Section 9 — The Three-Layer Penetration Check Filter

The penetration check does not run the expensive K-D tree computation blindly for every pair of parts. It applies a three-layer filtering strategy that eliminates the vast majority of pairs before they reach the expensive computation.

Layer One — Node Count Guard

Before the worker thread starts, the total node count of all selected parts is computed. If it exceeds one hundred thousand, the user is warned and asked to confirm. This is not a hard limit — the user can proceed — but it is a signal that the computation may take several minutes and that a more targeted approach — using Suggest Pairs first to identify the likely problem pairs — would be faster.

Layer Two — Empty Set Skip

Inside the worker loop, before doing anything with a pair of parts, the function checks whether either part has zero nodes from the cache. This handles parts that were in the selection but have no geometric data — purely structural parts like MASS elements, or parts that failed node retrieval for any reason. Zero-node parts cannot penetrate anything and are skipped immediately.

Layer Three — Bounding Box Pre-Filter

The most important filter is the bounding box overlap test. For each part, the bounding box is computed: the minimum and maximum X, Y, and Z coordinates across all the part's nodes. The bounding box is the smallest axis-aligned rectangular box that completely contains the part.

Two parts whose bounding boxes do not overlap cannot possibly have penetrating nodes. Their closest points are on the surfaces of their bounding boxes, and if those boxes do not overlap — even with a tolerance padding — the actual parts are farther apart than the tolerance.

 

The bounding box overlap test is six comparisons — one per pair of faces in each axis direction. It runs in constant time regardless of how many nodes either part has. The K-D tree computation it avoids is orders of magnitude more expensive.
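The six-comparison test, sketched with NumPy arrays for the box corners (the tolerance default matches the value discussed earlier; the function name is illustrative):

```python
import numpy as np

def boxes_overlap(amin, amax, bmin, bmax, tol=0.1):
    """Axis-aligned bounding-box overlap with tolerance padding.

    amin/amax and bmin/bmax are length-3 arrays of box corners.
    Six comparisons total, two per axis: constant time regardless
    of how many nodes produced the boxes.
    """
    return bool(np.all(amin <= bmax + tol) and np.all(bmin <= amax + tol))
```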

In a typical assembly with many parts spread across a volume, most part pairs are far apart. Their bounding boxes clearly do not overlap. The bounding box filter eliminates these pairs without touching the K-D tree, leaving only the small fraction of geometrically adjacent pairs for the expensive computation.


 

Section 10 — The Suggest Pairs Workflow: Bounding Boxes as a Pre-Screening Tool

The Suggest Pairs button is a standalone pre-screening tool that uses bounding box analysis to identify which part pairs are worth checking for penetrations, before running the full node-level check.

 

The motivation is practical. For a one-hundred-part assembly, the full penetration check examines 4,950 part pairs. Even with the bounding box filter running inside the check, this requires computing one hundred bounding boxes and evaluating 4,950 overlap tests. Then for the pairs that pass, K-D tree queries. On a large model this can take minutes.

The Suggest Pairs workflow separates the fast bounding box screening from the slow node-level analysis. It runs the bounding box overlap test across all pairs as a pre-computation, presents the list of potentially close pairs sorted by center-to-center distance, and lets you select which specific pairs to check in detail.

 

This workflow also runs in a background thread — it is not instantaneous for large assemblies since it has to build bounding boxes for all parts — but it is much faster than the full penetration check because it stops after the bounding box test.

The results dialog shows the close pairs with their center-to-center distances and offers a Select Top Five Pairs button that populates the Parts tab multi-selection with the parts from the five highest-priority pairs. One click goes from suggestion to selection. From there, clicking Check Penetrations runs the full node-level analysis on only those selected parts — a much smaller and faster computation than checking everything.
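The screening stage can be sketched end to end: boxes from node coordinates, the overlap screen, then ranking by center-to-center distance. All names here are illustrative:

```python
import itertools
import numpy as np

def suggest_pairs(part_nodes, tol=0.1):
    """Rank potentially close part pairs by center-to-center distance.

    part_nodes: {part_name: (N, 3) array of node coordinates}.
    Returns [(part_a, part_b, center_distance), ...] sorted ascending.
    """
    boxes = {p: (c.min(axis=0), c.max(axis=0)) for p, c in part_nodes.items()}
    candidates = []
    for a, b in itertools.combinations(sorted(boxes), 2):
        (amin, amax), (bmin, bmax) = boxes[a], boxes[b]
        # Cheap overlap screen first; most pairs stop here.
        if np.all(amin <= bmax + tol) and np.all(bmin <= amax + tol):
            dist = float(np.linalg.norm((amin + amax) / 2 - (bmin + bmax) / 2))
            candidates.append((a, b, dist))
    return sorted(candidates, key=lambda t: t[2])
```

Only the surviving pairs would then be handed to the expensive node-level check.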

 

The critical thinking lesson here is about computational workflow design. The problem is too large to solve in one step efficiently, but it can be solved in two steps: first a cheap approximation that filters the search space, then an expensive exact computation on only the candidates that survived the approximation. This two-stage strategy — cheap filter followed by exact verification — appears throughout computational geometry and data science. Learning to recognize when it applies is a transferable skill.


 

Section 11 — Reading the Penetration Results

When the penetration check completes, the results dialog shows a header: either a green checkmark and No Penetrations Detected, or a red warning triangle and the count of part pairs with detected penetrations.

 

The summary panel shows: the number of parts checked, the distance tolerance used, the number of part pairs analyzed, and the total penetration count.

For each penetrating pair, the results show the two part names, the number of close nodes found, and the minimum distance between any node from part A and any node from part B. A list of up to ten sample node IDs from the penetrating region is shown.

 

Reading these results correctly requires understanding what they mean in context. A node count of one or two close nodes at a minimum distance near zero does not necessarily mean the parts are penetrating in a problematic way. It may mean that the parts share a common surface node at a corner — which is a conformal mesh, not a penetration. It may mean that two surfaces are tangent and the nearest nodes from both parts are at the tangent point.

A penetration is genuinely problematic when many nodes are close and the minimum distance is zero or negative across a spatially distributed set of nodes — when the overlap covers a region, not just a point. The sample node IDs are provided so you can look up those nodes' coordinates, verify where in the model they are, and make a judgment call.

 

This is the anti-black-box philosophy applied to the interpretation of results. The program tells you the numbers. It even tells you what the numbers likely mean — touching versus overlapping. But it does not make the final engineering judgment for you. You have the coordinates. You have the node IDs. You can verify.


 

Section 12 — Algorithmic Complexity: A Framework for Reasoning About Performance

This part of the series has introduced several algorithmic complexity concepts: O(1), O(N), O(N log N), O(N²). The following consolidates these into a framework, because understanding complexity is one of the most transferable skills in computational engineering.

 

Big-O notation describes how the running time of an operation grows as the input size N grows. It is not about the exact time for a specific input — it is about the growth rate.

O(1) — constant time. A dictionary lookup is O(1): it takes the same amount of time whether the dictionary has ten entries or ten million. This is the gold standard.

O(N) — linear time. Scanning a list of N elements to find a match is O(N): doubling the list size doubles the time. This is acceptable when N is small.

O(N log N) — linearithmic time. Building a K-D tree from N points is O(N log N). Sorting a list of N items is also O(N log N). The log factor grows slowly: the base-two log of one million is about twenty. So a million-node tree build takes roughly twenty million operations, not a billion.

O(N²) — quadratic time. Comparing every node in set A to every node in set B — brute-force distance search — is O(M × N). For two ten-thousand-node sets that is one hundred million operations. Quadratic scaling is dangerous: doubling the input size quadruples the running time.
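These growth rates become concrete with a little arithmetic, using the ten-thousand-node figures from the brute-force discussion earlier:

```python
import math

# Rough operation-count estimates for one part pair, M = N = 10,000 nodes.
M = N = 10_000
brute_force = M * N                    # O(M x N): 100 million distance calcs
tree_build  = round(N * math.log2(N))  # O(N log N): roughly 133 thousand
batch_query = round(M * math.log2(N))  # O(M log N): roughly 133 thousand
```

The tree build plus the batch query together cost on the order of a quarter million operations against the brute-force hundred million, a speedup of several hundred times for this one pair.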

 

The penetration check's performance story is entirely about avoiding O(N²). The element dictionary pre-build replaces O(N) list scans with O(1) lookups. The bounding box filter eliminates most pairs before they reach the O(N log N) K-D tree construction. The batch K-D tree query replaces an O(M × N) brute-force loop with an O(M log N) vectorized operation.

Every one of these optimizations was added in response to a real performance problem observed on a real model. The code comments in the penetration check section document the original O(N × M) performance characterization and the optimizations applied. Reading those comments alongside this explanation gives you the complete trace from problem to solution.


 

Section 13 — The Stop Flag Pattern: Cooperative Cancellation

Both the Suggest Pairs worker and the penetration check worker support cancellation via a Cancel button. The implementation uses cooperative cancellation — the worker checks a flag and decides to stop, rather than being forcibly killed.

 

The stop flag is a single-element dictionary: stop_flag = {'value': False}. The dictionary wrapper is used instead of a plain boolean because Python's scoping rules prevent a nested function from rebinding a variable in an enclosing scope unless it is declared nonlocal. A mutable object like a dictionary can be modified from any scope without such a declaration, so setting stop_flag['value'] = True inside the cancel callback works correctly.

At the top of every major loop iteration in the worker — before processing a new part, before processing a new pair — the worker checks: if stop_flag['value'] is True, break. This check costs essentially nothing — it is a dictionary lookup followed by a boolean test. But it allows the Cancel button click to propagate into the running computation within at most one loop iteration's delay.
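A sketch of the cooperative pattern, simplified to a plain loop; in the program the flag would be set from the Cancel button's callback rather than a direct call:

```python
import threading

stop_flag = {'value': False}    # mutable wrapper: settable from any scope
processed = []

def worker(pairs):
    for pair in pairs:
        if stop_flag['value']:  # cheap check at each iteration
            return              # the worker decides to stop; nothing is killed
        processed.append(pair)

def cancel():
    # In the real app, this would be the Cancel button's command.
    stop_flag['value'] = True

cancel()                        # request cancellation before the worker runs
t = threading.Thread(target=worker, args=([('A', 'B'), ('A', 'C')],))
t.start()
t.join()
```

Because cancellation was requested before the loop began, the worker exits at its first flag check without processing any pairs.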

 

When the stop flag is set, the worker sends a cancelled message through the queue and returns. The main thread receives the cancelled message and closes the progress dialog without opening a results dialog. A brief informational message tells the user that the check was cancelled.

Forcible thread termination — which Python does not support cleanly — or OS-level process signals would produce unpredictable cleanup behavior. Cooperative cancellation via a shared flag is the correct pattern for cancellable background computations in tkinter applications.


 

Section 14 — The Complete Relationship Analysis Pipeline, End to End

The following numbered steps trace the complete relationship analysis pipeline from processing time through the live queries.

 

1.  During model processing, build_part_relationships assembles the relationship graph from interface nodes and parsed interaction definitions. This graph is stored and available for immediate queries.

2.  The user selects a part in the Parts tab and chooses a relationship tool.

3.  For nearest-parts search: find_surface_nodes retrieves the exterior node set for the selected part and each candidate part. Node coordinates are collected from the node coordinate dictionary. A K-D tree is built from the candidate part's surface node coordinates. The batch query computes minimum distances from all selected-part surface nodes to the tree. The minimum of those minimums is the part-to-part distance. All candidate parts are ranked by this distance, then filtered by count or distance threshold. Results are stored for the select-and-view action buttons.

4.  For interacting-parts: the interactions dictionary is scanned for TIE, CONTACT, and COUPLING entries. Substring matching against part names identifies relevant entries. The relationship graph is also queried for shared-node relationships. All found interactions are displayed with type, constraint name, and connected entity.

5.  For common-node: all node IDs from the selected part and each other part are collected into sets. Python set intersection finds shared IDs. Results are sorted by shared node count descending. The top match includes a sample of shared node IDs.

6.  For Suggest Pairs: a background thread builds bounding boxes for all parts. Overlap tests are run for all part pairs with tolerance. Close pairs are sorted by center-to-center distance and displayed. The Select Top Five Pairs action loads those top pairs into the Parts tab selection.

7.  For penetration check: the element dictionary is pre-built on the main thread. The background worker starts with a node cache pre-population step. For each part pair, the bounding box filter runs first. Pairs that pass the filter proceed to the batch K-D tree query. Close nodes — those within the tolerance distance — are flagged as penetrations. The heartbeat timer fires independently every second. Progress messages flow through the queue to the main thread. The cancel button sets the stop flag, which the worker checks at each iteration. Results or cancellation notification are sent via the done or cancelled message.
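The geometric core shared by steps 3, 6, and 7 (a bounding-box filter followed by a batch K-D tree query) can be sketched as follows. SciPy's cKDTree stands in for whatever tree implementation the program uses, and the two small point clouds are synthetic stand-ins for surface-node arrays:

```python
# Sketch of the bounding-box prefilter and batch K-D tree distance query.
# Part geometry here is invented; real inputs are surface-node coordinates.
import numpy as np
from scipy.spatial import cKDTree

def boxes_overlap(a, b, tol):
    """Axis-aligned bounding-box overlap test with a tolerance margin."""
    amin, amax = a.min(axis=0), a.max(axis=0)
    bmin, bmax = b.min(axis=0), b.max(axis=0)
    return bool(np.all(amin - tol <= bmax) and np.all(bmin - tol <= amax))

def part_distance(surf_a, surf_b):
    """Minimum distance between two parts' surface-node point clouds."""
    tree = cKDTree(surf_b)            # build once per candidate part
    dists, _ = tree.query(surf_a)     # batch query: one call, all nodes
    return float(dists.min())         # min of the per-node minimums

# Two tiny synthetic "parts", offset along x by 3.0 (closest nodes 2.0 apart)
part_a = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 1]], dtype=float)
part_b = part_a + np.array([3.0, 0.0, 0.0])

print(boxes_overlap(part_a, part_b, tol=0.5))   # False: gap exceeds tolerance
print(part_distance(part_a, part_b))            # 2.0
```

The prefilter is the cheap gate: pairs whose expanded boxes do not overlap are skipped, and only survivors pay for tree construction and the batch query.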

 

That is the complete relationship analysis pipeline — from the stored graph built at processing time through the live geometry queries that extend beyond what the file explicitly records.
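One piece of the pipeline worth making concrete: step 5's common-node query is nothing more than Python set intersection. A sketch with invented part names and node IDs:

```python
# Sketch of the common-node query via set intersection.
# Part names and node IDs are made up for illustration.
part_nodes = {
    'BRACKET': {101, 102, 103, 104, 105},
    'PLATE':   {104, 105, 106, 107},
    'BOLT':    {105, 900, 901},
}

def shared_nodes(selected, parts):
    sel = parts[selected]
    hits = []
    for name, ids in parts.items():
        if name == selected:
            continue
        common = sel & ids                       # set intersection of node IDs
        if common:
            hits.append((name, len(common), sorted(common)[:5]))  # sample IDs
    hits.sort(key=lambda h: h[1], reverse=True)  # most shared nodes first
    return hits

print(shared_nodes('BRACKET', part_nodes))
# [('PLATE', 2, [104, 105]), ('BOLT', 1, [105])]
```

Set intersection is hash-based, so the cost scales with the set sizes rather than their product, which is why this query stays fast even on large assemblies.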


 

Closing — Relationships as the Missing Dimension

A model made of parts without relationships is not an assembly. It is a collection.

 

The structural behavior of an assembly emerges from the interactions between its components — the load paths through the interfaces, the constraint forces at the joints, the contact pressures at the surfaces. A simulation that models each component correctly but gets the interface wrong is wrong in the most consequential way possible.

 

The tools in this part of the series — nearest-parts, interacting-parts, common-node, penetration check, Suggest Pairs — are the spatial intelligence layer of the program. They answer questions that cannot be answered by reading keyword lists or property tables. They answer questions about how the assembly exists in space, which parts are next to each other, and whether the model's geometry is physically consistent before any solver ever sees it.

The computational techniques behind them — K-D trees, bounding box filtering, batch vectorized queries, cooperative thread cancellation, the pre-built element dictionary — are not esoteric algorithms. They are standard tools in computational geometry and scientific computing that anyone building tools in this space should know. They are described here not to show off the implementation, but because understanding why they are needed, and what they replace, is part of the critical thinking framework.

 

Part Seven covers the program's interface architecture — how the tabbed layout was designed, how the resizable panels work, how color themes are implemented, how font preferences are stored, and how the entire program is packaged for distribution as a standalone application using PyInstaller.

 

Source code, documentation, and all companion readers are at McFaddenCAE.com.

 

 

 

End of Part 6 — Part Relationships, Penetration Checks, and the Nearest-Parts Search

Next: Part 7 — Interface Architecture, Color Themes, Preferences, and PyInstaller Distribution

 

© 2026 Joseph P. McFadden Sr. All rights reserved.  |  McFaddenCAE.com
