Wield Enabling Tool to Master CFD Meshing

The quality of the simulation output depends critically on getting the mesh right. The type of mesh and the kind of refinement should be driven by the physics at hand. For instance, a flow whose turbulence generation is dominated by boundary-layer separation needs a strong focus on boundary-layer refinement to keep Y+ ≈ 1. A premixed combustion simulation in an SI engine will require an LES turbulence model, and therefore mesh refinement in the bulk to resolve about 80% of the turbulence kinetic energy there. Cases with moving parts in the flow have to be meshed with dynamic meshing in mind, so that negative cell volumes are avoided. Narrow gaps, the long list of parts in a big automobile assembly, and similar complexities demand diligent craftsmanship.
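As a rough guide for the boundary-layer sizing just mentioned, the first prism-layer cell height for a target Y+ can be estimated from a flat-plate skin-friction correlation. This is a generic back-of-the-envelope sketch, not a Fluent feature, and the correlation used (Cf = 0.026/Re^(1/7)) is only one of several in common use:

```python
import math

def first_cell_height(y_plus, u_inf, rho, mu, length):
    """Estimate the first prism-layer cell height for a target y+.

    Uses a turbulent flat-plate skin-friction correlation, so treat
    the result as a starting point for meshing, not an exact value.
    """
    re = rho * u_inf * length / mu           # Reynolds number
    cf = 0.026 / re ** (1.0 / 7.0)           # flat-plate skin-friction coefficient
    tau_w = 0.5 * cf * rho * u_inf ** 2      # wall shear stress [Pa]
    u_tau = math.sqrt(tau_w / rho)           # friction velocity [m/s]
    return y_plus * mu / (rho * u_tau)       # first cell height [m]

# Example: air at 20 m/s over a 1 m reference length, targeting y+ = 1
h = first_cell_height(y_plus=1.0, u_inf=20.0, rho=1.225, mu=1.8e-5, length=1.0)
print(f"first cell height = {h * 1e6:.1f} micrometres")
```

For these conditions the estimate comes out in the tens of micrometres, which is why boundary-layer meshing for Y+ = 1 drives cell counts up so quickly.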

This blog article describes the advantages of Ansys Fluent Meshing that help you thrive through such challenging meshing tasks.

Reasons to adopt and benefit from Ansys Fluent Meshing

  • Generates polyhedral meshes; polyhedral prisms easily uphold mesh quality in refined boundary-layer regions.
  • Offers wrapping advantage to mesh large assemblies.
  • Parallel-mode execution without any HPC licenses, with consistent speed scale-up.
  • Runs with both Solver and Pre/Post licenses.

To shed more light on these features, which are worth a deep dive, I will narrate some of my recent personal experiences as a CFD user. An electric motor meshing exercise is the best fit to kick off this section.

While performing a conjugate heat transfer analysis for an electric motor, I faced a few challenges with mesh generation. Fluent Meshing, with its guided task-based workflows and well-suited algorithms, helped mesh the complex geometry with good quality, in less time, and with an economical mesh count. In this post, I will discuss the Fluent Meshing approach and how it helped with the pre-processing for the motor thermal analysis.

For the electric motor analysis, I was initially able to achieve a conformal mesh with good quality, but the mesh count was high. A higher mesh count consumes more solving time, so I looked for options that could reduce the count while preserving the quality. One reason for the high count was the proximity settings: the solids were also meshed with fine sizing to maintain conformality. So a non-conformal approach was used, with somewhat coarser meshes on the solids, which allowed the fine mesh to be confined to the fluid regions. With that, the mesh count was reduced to about 60% of the original. However, with the non-conformal mesh I had to invest time in assigning mesh interfaces, which eventually consumed more time because multiple interfaces were involved.

The same model was then tried in Fluent Meshing with the conformal poly-hexcore approach, this time with prism boundary layers included. Eight cores were used for parallel meshing, which drastically reduced the meshing time. The mesh count dropped to about 40% of the tetrahedral mesh, and the mesh quality stayed within acceptable limits. For the boundary-layer resolution too, the polyhedral prisms maintained quality better than tetrahedral prisms within the narrow air-gap regions. Results for the three cases were compared and showed good agreement. From this experience, I observed that Fluent Meshing can greatly reduce pre-processing time, and I highly recommend the approach. In the next part, I will discuss more of the capabilities provided in Fluent Meshing, which highlight the extent to which it can simplify pre-processing work.

Fluent Meshing generates native polyhedral meshes, which reduce the mesh count while preserving mesh quality. It is integrated with Fluent to form a single-window workflow for CFD simulations, so one can switch directly from Fluent Meshing into the Fluent setup, solution, and post-processing modules. A further advantage is parallel meshing: parallelization over multiple cores accelerates not just solving but also meshing, which can reduce pre-processing time drastically. Polyhedral prisms fit into narrow gaps without the distortion that triangular prisms suffer, which is good for boundary-layer resolution. The task-based workflows provide a guided, stepwise meshing approach in which one can set up meshing parameters and edit them later if the mesh resolution is not as expected. Fluent Meshing can build a conformal mesh for watertight geometry, capturing all detailed features. In case of poor surface or volume mesh quality, the "Improve Mesh" option activates automatic node movement to raise the mesh quality to the desired value. Apart from tetrahedral, hex-core, and polyhedral meshing, Fluent Meshing offers the unique poly-hexcore option, also known as Mosaic meshing technology.

For more information on task-based meshing, watch the webinar.

For dirty geometries with leakages and overlapping parts, an analyst may have to spend hours preparing a watertight geometry for simulation; developing one for components like an engine, or for complex assemblies, can be very time-consuming. Fluent Meshing is equipped with wrapping technology to capture such complex or dirty geometry. An underhood analysis with complex, interfering geometries like the engine, radiator, and chassis can easily be meshed with wrapping. CAD file formats like STL and STEP can be imported directly, and part management can be performed to select the simulation model. While meshing a dirty geometry, complex features that have little impact on the simulation results can be overlooked by the wrapper. If the wrapper misses any feature of interest, the edge-feature extraction method lets you fine-tune the wrapping. A leakage-detection option allows the wrapper to patch leakages in the geometry below a given threshold value, eliminating the tedious need to work at the geometry level.

For more information on wrapping technology, watch the webinar.

In a nutshell, Fluent Meshing can accommodate any type of geometry, and with the appropriate mesh strategy one can develop high-quality meshes with appreciable ease. Considering the complexity of products being brought to market, Fluent Meshing, with its wrapping technology and guided workflows, is a promising way to reduce the overall pre-processing effort.


Open the Door to Material Optimization

Introduction:

The world is evolving at quantum speed, with technology transforming every sector. Businesses have adopted this transformation and reaped the benefits of technology in their core processes, growing into multi-billion-dollar companies. With the product as the hero of every story, engineering a physical product demands focus on four major factors: design, analysis, manufacturing, and materials. While the first three have been digitally transformed, the materials field still lags behind. This article focuses exclusively on optimizing your material choices.

Let’s start with an example of a company facing a similar conundrum.

Background:

Consider the case of an OEM that manufactures an industry-renowned product.

Due to stringent industry standards, and to stay ahead of the competition, the company decided to optimize its existing product and its performance. They relied on simulation results for quick feedback across different design iterations, and parametric optimization yielded even better results than the manual iterations. Overall, they achieved a 9% reduction in weight and a 5% increase in efficiency through design optimization.

Much to the Team’s surprise, the product manager wasn’t satisfied with this result. Hence, he assembled his team to initiate an experiment with different material types. 

The broader idea of this activity was to understand how the product could be improved while cutting costs to the company; hence the activity was named material optimization.

Optimized Product = Design Optimization + Material optimization

Following the superior’s directive, the team dedicated its efforts to getting the best out of the current results: cutting costs, reducing development time, and raising the standards of performance. Four team members heading R&D investigated the case study with different ideas, as below.

The first member used the material data already available with him to see if he could find the best possible combination;

The second member used the materials preferred by his company, to avoid supply-chain issues;

The third member reached out to suppliers and consultants for advice on material choices; and

The fourth member browsed the internet for material data.

However, none of these approaches answered the questions below:

  1. Which material to choose?
  2. Is there a better material choice available?
  3. Is there a cheaper solution? 
  4. Is the chosen data reliable?

This is where companies need a tool like Granta Selector, which answers these questions.

Granta Selector is a tool that helps optimize your material choices. It not only contains material properties for metals, plastics, polymers, ceramics, and various other classes of materials, but also offers features to search, plot, and compare your choice of materials, as shown in the picture.

Apart from the features mentioned above, Granta Selector has:

  • An FE export tool to export simulation-ready material data to most FE platforms;
  • A Synthesizer tool to estimate material and process costs;
  • An Eco Audit tool to estimate the environmental impact of the selected material at an early design stage.

To see these tools in action, click here to understand how Tecumseh, a global leader in commercial refrigeration compressors, used Granta Selector to cut development time three-fold and save millions of euros by making the right materials decisions.

Please click here for more information on material optimization.

Please feel free to connect with us at marketing@cadfem.in or +91-9849998435 for a quick demo of this product.

Author Bio:

Mr. Gokul Pulikallu, Technical Lead-South

CADFEM India

Mr. Gokul Pulikallu holds a Bachelor of Technology and has 9 years of experience in the field of structural mechanics simulation and optimization. His main focus is design optimization and material optimization, and he helps customers adopt these technologies efficiently.


ANSYS Structures R19 – Release Update

This post discusses the latest developments and enhancements in the ANSYS Structures R19 applications. Maximize your RoI and productivity with the latest ANSYS release.

Today’s ever-changing and increasingly competitive world makes life complicated for product developers such as you. Hence, you are perpetually in a race to launch better products and increase profitability.

In order to help you realize your product promise, we are glad to introduce you to ANSYS Structures R19, with various improvements and additions. In the new release, you’ll find exciting and innovative technologies that make the development of complex products effortless, with the help of improved solver capabilities, better usability, integration of complex physical phenomena, and solver scale-up using HPC.

Enhanced Utility and Scale-Up

This year ANSYS brings in some radical changes to help you capitalize on your current and future ANSYS investments. Starting off, the following will enhance the utility of ANSYS for many applications and help in speeding up run time.

  • The inclusion of small-sliding algorithms significantly reduces the time involved in contact detection by performing the contact search only at the beginning of the analysis, leading to faster solutions.


  • Additions to the user interface, such as the Selection Clipboard, help you save selection information intermittently and retrieve it whenever necessary to define BCs, Named Selections, etc.
  • Material Plots help in visualizing material assignments to components and give a holistic understanding of the materials in an assembly.
  • Improvements in the meshing and contact algorithms lead to faster problem definition in the interactive environment of ANSYS Mechanical.

Speedup with DMP Scaling

  • Compute with 4 cores as default across entire ANSYS Mechanical (Pro, Premium, Enterprise) product lineup. Hence more value for your investments!
  • Achieve 3X scale-up using HPC with improved Structures R19 Solvers and utilize HPC Pack across entire ANSYS product portfolio. Therefore, your problems run faster and better!
Effortless Modelling of Complex Phenomena

Increased use of simulation across various industries requires engineers to simulate complex phenomena. ANSYS Structures R19 helps make the simulation of complex physical phenomena seem effortless.

  • SMART: Separating, Morphing, Adaptive and Re-Meshing Technology (SMART) makes simulation of Fatigue Cracks easy and interactive. SMART fracture capabilities simulate crack growth without the need for crafted meshes.
  • Coupled Physics: New 22x elements enable coupling of structural and thermal DOFs with magnetic DOFs.
  • Enhanced FSI coupling allows faster data transfer between CFD and structural solvers.


ANSYS Structures R19: Other Noteworthy Enhancements
  • Additional data import from external models
  • Element Birth and Death as Native Mechanical Feature
  • Quick and Easy RST import
  • Improved NLAD-based Simulation for physics involving Large Strains
  • Beam to Beam Contacts
  • Higher Scaling in DMP


In conclusion, this article serves as a good foundation to further understanding. There is much more to learn about ANSYS Structures R19. Join us on April 12 for the ANSYS Structures R19 Update Webinar to get the details! Register now.


Robustness Evaluation – Why Bother?

This article will explain how ANSYS optiSLang can be used for robustness evaluation in virtual product development.

A successful product. Isn’t that the goal of every product company? It begins with engineers coming up with world-class product innovations and ends with the right marketing mix that brings commercial success. Is every product successful? No. Is every product with a great design successful? Maybe.

The Symptom

Image courtesy: Android Authority

More often than not, we find market leaders stumbling with product failures. Samsung’s infamous Note 7 instantly comes to mind. Hundreds of users experienced dangerous incidents in which phones caught fire due to short-circuiting. Samsung conducted rigorous internal testing and several independent investigations. They found that, in certain extreme situations, the electrodes inside each battery crimped, weakening the separator between the electrodes and causing short circuits. In other cases, batteries simply had thin separators, which increased the risk of separator damage and short-circuiting. Economically, the incident caused Samsung to recall 2.5 million devices and lose over $5 billion, and it damaged the company’s reputation.

Faulty inflators in Takata airbags contained a defect that caused some of them to explode and project shrapnel into drivers and passengers. More than 50 people worldwide lost their lives due to this design failure. Seventy million Takata airbag inflators had to be recalled, at a cost of $9 billion to the company’s automaker customers. For a Tier-1 supplier, this liability was so huge that it filed for bankruptcy.

Such glaring errors after product launch, with severe economic implications, aren’t limited to Samsung and Takata alone. Honda, Michelin and many more companies have been involved in product recalls due to design failures.

Obviously, such design flaws need to be mitigated, don’t they?

The Probable Solution

To preempt design failures, today’s engineers use state-of-the-art engineering technology. Traditionally, product development teams used extensive prototyping and testing to validate design variants during the design life cycle. Of course, this is cumbersome, expensive and time-consuming.

Over the past few decades, engineering simulations have opened up a whole new range of possibilities for the design engineers. ANSYS, Inc., the market leader for engineering simulations, provides state-of-the-art technology to simulate systems involving mechanical, fluid, electrical, electronic and semiconductor components. With added insight, design engineers are able to test a lot more design variants on a virtual platform using this technology.

Consequently, the benefits – innovation, lowered cost of product development, higher product profitability and faster time-to-market. The staggering economic benefits and tremendous value on the offer have prompted several product companies to introduce simulations upfront using a Simulation-Driven Product Development approach.

Companies like Samsung and Takata were power users of engineering simulation. They used the technology extensively in their design phase, performing virtual tests to validate designs. Only validated designs were put through production and QA and then sent to market. Despite simulating and validating their designs, these companies witnessed monumental product failures in the market that caused loss of life, economic losses, and damage to their reputations.

If they used simulation-driven product development, what went wrong?

The Cause

While the probable solution can mitigate and even eliminate design failures, there are other forces at play that you will need to evaluate carefully. Hence it is imperative to understand the root cause of design failures that occur despite extensive, state-of-the-art simulation.

Many design engineers underestimate, or do not consider at all, one important aspect, for lack of proper understanding: variability. Just as design parameters such as thickness or physical loads can be varied to test different design variants, some parameters display inherent variability.

Let me explain with a material parameter: Young’s modulus. If you’re an engineer by qualification, you would have come across the Universal Testing Machine (UTM) in your freshman or sophomore year of college. To measure the Young’s modulus of a given material (say steel), the UTM pulls a specimen at its extreme ends to create tension. Using mathematical calculations, you’ll arrive at a number close to 210 GPa as the Young’s modulus of mild steel. Now repeat this test for 99 other specimens of the same material: each test result will be different, and never exactly the same. Other than the odd case of a faulty UTM apparatus, there’s only one reason for that. Natural scatter.
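The natural scatter just described is easy to visualize with a quick numerical experiment. The sketch below draws 100 hypothetical UTM measurements around the nominal 210 GPa with an assumed 2% coefficient of variation; the scatter level is illustrative, not measured data:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is repeatable

nominal_e = 210.0  # GPa, nominal Young's modulus of mild steel
cov = 0.02         # assumed 2% coefficient of variation (illustrative)

# 100 simulated UTM test results for the "same" material
measurements = [random.gauss(nominal_e, cov * nominal_e) for _ in range(100)]

mean_e = statistics.mean(measurements)
std_e = statistics.stdev(measurements)
print(f"mean = {mean_e:.1f} GPa, standard deviation = {std_e:.2f} GPa")
print(f"min = {min(measurements):.1f} GPa, max = {max(measurements):.1f} GPa")
```

No two "measurements" are identical, yet all are valid values of the same property; that spread is exactly what a robustness evaluation must account for.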

The Hero: Robustness Evaluation

Such statistical variability leads to variability in the performance parameters of the product. This is obviously important, and engineers need to assess designs for variability well ahead of the product launch. Under variability, there is only one way to assess designs for product failure or risk: robustness evaluation.
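At its core, a robustness evaluation propagates the input scatter through the model and inspects the scatter of the response. The following Monte Carlo sketch does this for the textbook bending-stress formula of a cantilever beam; it only illustrates the principle and is not a substitute for optiSLang's workflow. All scatter values and the stress limit are assumed for illustration:

```python
import random
import statistics

random.seed(0)

def max_bending_stress(force, width, height, length=1.0):
    """Max bending stress of a rectangular cantilever: sigma = 6*F*L / (b*h^2)."""
    return 6.0 * force * length / (width * height ** 2)

# Assumed scatter of the inputs around their nominal values (illustrative)
samples = []
for _ in range(10_000):
    f = random.gauss(1000.0, 50.0)   # tip load [N], 5% scatter
    b = random.gauss(0.05, 0.001)    # width [m], 2% scatter
    h = random.gauss(0.10, 0.002)    # height [m], 2% scatter
    samples.append(max_bending_stress(f, b, h))

mean_s = statistics.mean(samples)
std_s = statistics.stdev(samples)
limit = 13.5e6  # Pa, assumed allowable stress
p_fail = sum(s > limit for s in samples) / len(samples)
print(f"mean stress = {mean_s / 1e6:.2f} MPa, sigma = {std_s / 1e6:.2f} MPa")
print(f"estimated P(stress > limit) = {p_fail:.2%}")
```

A nominal-value calculation would report 12 MPa and declare the design safe; the scatter analysis reveals a small but real probability of exceeding the limit, which is precisely the kind of risk a robustness evaluation exposes.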

Robustness Evaluation with ANSYS optiSlang

The preferred tool for robustness evaluation is ANSYS optiSLang. For a better understanding, a lot of material is available in more detail; instead of reading, you may also want to consider watching these webinars here and here.

Can you attribute lack of design robustness to any other product failures that you have witnessed? Do you have alternate views? Please let me know in the comments section.


Maximize Fracking Profitability with ANSYS

This article explains how ANSYS and a few other tools can be used to simulate hydraulic fracturing, commonly known as fracking, to reduce costs and increase the profitability of shale gas projects.

Shale Gas

Shale gas is a form of natural gas trapped within shale formations. Because of its abundance, shale gas is a lot cheaper than it has been in years. Hydraulic fracturing or fracking helps in extracting it efficiently.

According to the American Enterprise Institute, “the direct benefit of increasing oil and gas production includes the value of increased production attributable to the technology. In 2011, the USA produced 8.5 trillion cubic feet of natural gas from shale gas wells. Taking an average price of $4.24 per thousand cubic feet, that’s a value of about $36 billion, due to shale gas alone.” As a result of the increase in fracking, natural gas imports into the United States fell by 25 percent between 2007 and 2011.
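The quoted production value is simple to verify with one line of arithmetic:

```python
production_cf = 8.5e12   # cubic feet of shale gas produced in the USA, 2011
price_per_kcf = 4.24     # USD per thousand cubic feet
value = production_cf / 1_000 * price_per_kcf
print(f"value of 2011 shale gas production: ${value / 1e9:.1f} billion")
```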

What is Fracking?

The term simply means creating fractures using hydraulic fluids. In this technique, production teams pump huge volumes of water and proppant at high pressure into the gas well, mixing in a few chemicals that improve fracking performance. Because shale layers have very low permeability, they choke the flow of the natural gas trapped within them.

Fracking creates a connected fracture network between the pores of the rock, through which the natural gas escapes. In the first step, production teams drill horizontally along the shale layers. Through perforations, specialists then pump water into the rock; because the water is injected at high pressure, the shale layers fracture. Once the pressure is reduced, the water is retrieved from the shale layers, leaving the sand behind. The proppant dwells in the rock layers and keeps the cracks open, allowing gas to escape.

Benefits and Disadvantages of Fracking

Fracking helps in accessing the natural shale gas trapped deep beneath the earth; with traditional methods of extraction, this energy potential cannot be exploited. Recently developed methods of vertical and horizontal drilling have added to fracking’s favor: they permit drilling thousands of feet into the ground to access the trapped shale gas.

It is said that shale gas causes less air pollution than dirtier fuels like coal and oil. However, fracking itself can cause devastating effects: air emissions and climate change, high water consumption, water contamination, land use, risk of earthquakes, noise pollution, and health effects on humans.

Economic Benefits of Simulation

To achieve an optimal design for a gas well, the standard industry practice is to conduct a large number of field trials, which require high capital investment and time and thus significantly increase the project cost.

To obtain profitable shale gas production, I recommend using a fully coupled 3D hydraulic-mechanical simulation. The costs of such a simulation are far lower than those of traditional field trials, and many of our customers in the oil & gas industry have achieved better output and higher project profitability with it.

You can find the schematic view of the hydraulic fracturing simulation below.

Schematic view of the fracking simulation

Essential Pre-Requisites for Simulation

We gather the input data for the simulation from different disciplines: geology, petrophysics, and geomechanics. From the geology of the rock structure, we extract the lithology and layering, the attitudes of the beddings, and natural fracture data. Accurate determination of the petrophysical properties of both the reservoir and its fluid contents is necessary; we consider features like porosity, permeability, and saturation for the reservoir. This also includes evaluating the properties that determine the hydrocarbon concentration in the reservoir and its ability to produce gas.

Along with the surface and sub-surface properties of the rock, the in-situ stress parameters are just as important for the simulation. I also account for the elastic properties and strength parameters of the intact rock. Geomechanical studies of the rock structure also reveal the strength parameters of natural fractures, if any. Using multiPlas, I model these rock-specific material parameters and joints.

Of course, gathering this data can look daunting to you. However our expertise combined with strengths from Dynardo GmbH – the leading global experts in simulation of hydraulic fracturing – can help!

Fracking Simulation – Readying the Model

For the simulation of the fracking process, I use a sequentially coupled hydraulic-mechanical modeling approach. Therefore, I construct two models: a hydraulic flow model and a mechanical model.

3D model with different soil layers

To account for the strength and stress anisotropies of the rock structure, I need to consider a 3D model. These variables help us constantly monitor the behavior of the fracking process; to capture the anisotropic nature of the rock, you’ll need the strength and stress anisotropies of both the rock matrix and the fracture system.

Sequentially Coupled Hydraulic-Mechanical Analysis in ANSYS

In ANSYS Mechanical, we start with a transient hydraulic flow analysis (analogous to a transient thermal analysis) to obtain the pore-pressure field. The pressure increases at the fracture-initiation locations due to the pumping of fluid and the low permeability of the rock. If the pressure is large enough, the rock starts to fail and fractures open up. As a result, the permeability of the rock structure increases, changing the pressure distribution in the hydraulic flow model. From a mechanical perspective, the pressure increase changes the effective stresses within the rock. After every fluid time increment, the change in mechanical forces due to the pore-pressure change is introduced into the mechanical analysis; the force at every discretization point of the smeared continuum is computed from the pore-pressure gradient.

I set up the coupling inside ANSYS in an explicit manner: one iteration cycle is performed per time step, and the time step must adequately resolve the progress of the fracture growth. At each time step, the transient hydraulic flow analysis runs first; then the mechanical analysis is conducted with the updated pressure field from the hydraulic flow model. The mechanical analysis yields updated stress and plastic-strain fields and updated hydraulic conductivities. I apply the updated hydraulic conductivities to the hydraulic model in the subsequent time step.
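The explicit staggered scheme described above can be sketched as a simple loop. The solver functions below are placeholders with made-up toy physics, not an ANSYS API; what matters is the order of operations per time step: hydraulic solve, mechanical solve, conductivity update, repeat:

```python
# Sketch of an explicit (staggered) hydraulic-mechanical coupling loop.
# All physics here is a toy stand-in for the real field solves.

def run_hydraulic_step(pressure, conductivity, dt):
    """Placeholder transient flow solve: pressure builds where fluid is pumped."""
    return [p + dt * c for p, c in zip(pressure, conductivity)]

def run_mechanical_step(pressure):
    """Placeholder mechanical solve: plastic strain once pressure exceeds 1.0."""
    return [max(0.0, p - 1.0) for p in pressure]

def update_conductivity(conductivity, plastic_strain):
    """Fracturing (plastic strain) increases the hydraulic conductivity."""
    return [c * (1.0 + 10.0 * eps) for c, eps in zip(conductivity, plastic_strain)]

pressure = [0.5, 0.8, 0.3]       # normalized pore pressure at three points
conductivity = [1.0, 1.0, 1.0]   # normalized hydraulic conductivity

for step in range(5):            # one iteration cycle per time step
    pressure = run_hydraulic_step(pressure, conductivity, dt=0.1)
    plastic_strain = run_mechanical_step(pressure)
    conductivity = update_conductivity(conductivity, plastic_strain)

print("pressure:", pressure)
print("conductivity:", conductivity)
```

Even in this toy version the positive feedback is visible: the point where the rock first "fails" ends the run with a far higher conductivity than its neighbours, mimicking a fracture opening up.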

Crack expansion in the model

In the mechanical analysis, the development of fractures is represented by a plasticity model in ANSYS. As a result, I cannot directly measure the fracture openings; I calculate them from the plastic strains.

Model Calibration & Optimization of Fracking Parameters

Because of the large number of statistically varying reservoir parameters, the reservoir model needs an advanced calibration procedure. First, I calibrate numerical parameters such as the maximum permeability of open joints or the energy dissipation at the pore-pressure front.

After calibrating all the parameters, I identify the parameters contributing most to maximum crack volume using the optiSLang software. As you will recognize, maximum crack volume correlates with maximum shale gas output. I validate the behavior of these important parameters and then calibrate the analysis model against the field measurements. The calibrated model is later used to optimize the stimulated volume and predict the gas production rate of the wells.
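The variable-ranking step can be illustrated with a simple stand-in for optiSLang's sensitivity measures: sample the inputs, evaluate a crack-volume model, and rank the inputs by their correlation with the output. The toy model and its coefficients below are invented purely for illustration:

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Toy model: crack volume driven mostly by injection pressure, weakly by
# permeability, plus noise (coefficients are illustrative, not calibrated).
pressures, perms, volumes = [], [], []
for _ in range(500):
    p = random.uniform(0.0, 1.0)
    k = random.uniform(0.0, 1.0)
    pressures.append(p)
    perms.append(k)
    volumes.append(3.0 * p + 0.3 * k + random.gauss(0.0, 0.1))

c_pressure = pearson(pressures, volumes)
c_perm = pearson(perms, volumes)
print(f"corr(volume, pressure)     = {c_pressure:.2f}")
print(f"corr(volume, permeability) = {c_perm:.2f}")
```

The ranking immediately flags injection pressure as the parameter worth calibrating first, which is the same kind of insight a sensitivity study delivers on the full reservoir model.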

Summary & Outlook

Evidently, applying simulation to the fracking process underlines its predictability. Simulation cuts down the cost of field trials and brings down the time-to-market, thereby significantly increasing project profitability.

If you’re into gas exploration, you should contact us by filling in this form or by writing to sales@cadfem.in. We’ll be glad to explain some of our recent projects that have benefited customers in the oil & gas industry.


Demystifying Modal Analysis (Part I)

In this article, I will discuss modal analysis, a standard topic that I’ll nevertheless strive to demystify using a simple example and FAQs.

Motivation for Modal Analysis

As a mechanical engineer, life is always interesting because I can correlate the knowledge gained from books with real-life scenarios. As a student, I heard my professor give a real example of a bridge failure caused by marching soldiers, and what followed was a very interesting lecture about dynamics. Until then, I had never understood the power of words such as dynamics, vibration, and resonance. The example gave me food for thought: how could a bridge fail under a smaller dynamic load when it could withstand a heavier static load?

For those of you who are curious: the bridge was England’s Broughton Suspension Bridge, which failed in 1831 due to soldiers marching in step. The soldiers’ marching steps resonated with the natural frequency of the bridge, causing it to break apart and throwing dozens of men into the water. Because of this catastrophic failure, the British Army issued orders that soldiers crossing a suspension bridge must ‘break step’ and not march in unison.

Such failures have placed more emphasis on analyzing structures (mechanical or civil) for dynamic loads whenever they undergo any sort of vibration. Traditionally, test equipment has been used to monitor vibrations in new designs experimentally; however, this is costly. Instead, we apply finite element analysis (FEA) to such problems. FEA solvers have evolved, and today’s solvers are powerful not only in statics but in dynamics too.


Modal Analysis: Getting Down to the Basics

In any dynamic/vibration analysis, the first step is to identify the dynamic characteristics of the structure. This is done through a simple analysis called modal analysis. Results from a modal analysis give us insight into how the structure would respond to a vibration or dynamic load by identifying the structure’s natural frequencies and mode shapes.

Modal Analysis is based on the reduced form of the dynamic equation of motion:

[M]{x''} + [C]{x'} + [K]{x} = {F(t)}   ... (1)

As there is no external force acting, and damping is neglected, the equation reduces to:

[M]{x''} + [K]{x} = {0}   ... (2)

I have skipped the derivation of the natural frequency, as it is readily available in textbooks. Assuming harmonic motion {x} = {φ} sin(ωt) turns equation (2) into the eigenvalue problem ([K] − ω²[M]){φ} = {0} ... (3). Each natural frequency is substituted back into this equation to find the respective mode shape. The natural frequencies are the eigenvalues, and the respective mode shapes are the eigenvectors. A natural frequency and its mode shape together are called a mode.

Eigenvectors represent only the shape of deformation, not absolute values; that is why they are called mode shapes. A mode shape is the shape the structure takes while oscillating at the respective frequency. An important point to remember is that a structure has multiple modes, and each mode has a specific mode shape. If a load is applied at the same frequency as a natural frequency, and in the same direction as its mode shape, the magnitude of oscillation will grow. Without sufficient damping, this scenario leads to failure through a phenomenon called resonance. To avoid this phenomenon, calculating the modes is of great importance in dynamics.
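The ideas above can be made concrete with a two-degree-of-freedom spring-mass chain, small enough to solve by hand. The values of m and k are illustrative; note in particular that the mode-shape scale is arbitrary, and only the ratio between its entries carries meaning:

```python
import math

# Two-DOF spring-mass chain: wall -- k -- m -- k -- m (equal masses and springs).
# The free-vibration equation [M]{x''} + [K]{x} = {0} leads to the eigenvalues
# of M^-1 K, each eigenvalue being a natural frequency squared (omega^2).
m = 1.0     # kg (illustrative)
k = 1000.0  # N/m (illustrative)

# A = M^-1 K for this system; it is 2x2, so the eigenvalues come from a quadratic.
a11, a12 = 2.0 * k / m, -k / m
a21, a22 = -k / m, k / m
tr, det = a11 + a22, a11 * a22 - a12 * a21
lam1 = (tr - math.sqrt(tr ** 2 - 4.0 * det)) / 2.0  # omega^2 of mode 1
lam2 = (tr + math.sqrt(tr ** 2 - 4.0 * det)) / 2.0  # omega^2 of mode 2

for i, lam in enumerate((lam1, lam2), start=1):
    freq_hz = math.sqrt(lam) / (2.0 * math.pi)
    # Mode shape from (a11 - lam)*x1 + a12*x2 = 0 with x1 fixed to 1:
    # the absolute scale is arbitrary, only the 1 : x2 ratio is physical.
    x2 = (lam - a11) / a12
    print(f"mode {i}: f = {freq_hz:.2f} Hz, shape = [1, {x2:.3f}]")
```

Scaling a mode shape by any factor still satisfies the free-vibration equation, which is exactly why the solver's reported magnitudes should never be read as real deformations.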

Frequently-Asked Questions

Having said that, questions will certainly arise. In my opinion, these are the most commonly asked questions in support calls by customers using ANSYS.

  • Why don’t the frequencies from simulation match the test results?
  • Why are the deformations and stresses in a modal analysis so high?

From equation (3), it is clear that the natural frequency of a structure depends on its stiffness and mass. In order to accurately capture frequencies in FEA, the following points are important:

  • You need to accurately capture the mass of the structure and of connecting/ignored members.
  • Your mesh can be coarse, but it needs enough refinement to accurately capture the stiffness of the structure. If you are interested in the local modes of slender members, you’ll need to perform local mesh refinement.
  • You need to define appropriate boundary conditions in order to capture realistic frequencies.
  • You need to accurately model the contacts between different bodies in an assembly, since they drastically affect the stiffness of the structure.

For the second question: a lot of confusion exists because the modes extracted in a modal analysis show deformation magnitudes. In equation (2), you can see that no external load is applied to the structure. This may make you wonder where these values come from. Let’s have a look at the example of a simple cantilever beam.

Demystifying Modal Analysis
Fig. 1 – Mode shape & stress shape of Cantilever Beam

Fig. 1 shows the first mode shape and stress shape extracted from the modal analysis. I observe deformations as high as 253 mm and stresses of 4,914 MPa, far greater than the ultimate strength of steel (about 500 MPa). You may wonder: why did we get these high values?

This happens because the FEA solver returns the mode shape, not actual deformation magnitudes, as output. The magnitude of the mode shape is arbitrary (as seen in Fig. 1); the high value comes from a scale factor chosen for mathematical reasons and does not represent anything physical in the model. However, the values are useful as relative measures. Take the first mode as an example: the maximum deformation occurs at the free end compared to any other location. This pattern changes from mode to mode.

Since we have deformations, the corresponding stresses and strains can be computed. Once again, these are relative values. If you ask the FEA solver for stresses and strains, it uses the same scaled deformation magnitudes to calculate them. They are referred to as the stress shape and strain shape (not to be confused with stress state or strain state) because no loads are applied. The magnitudes of these stresses and strains are meaningless, but their distributions are useful for finding hot spots in the respective modes.

Conclusion

Modal analysis offers much more than just frequencies and mode shapes. It is primarily the stepping stone for linear dynamics studies that calculate the actual deformation due to different kinds of dynamic loads. Modal analysis also has many secondary applications, which I will discuss in my next blog.

 


Drop Test Analysis with Bolt Pre-Stresses

This article introduces a smart, time-saving approach for drop test analysis with pre-stresses using LS-DYNA.

Many products that are subject to handling during transport, installation, or repair are at risk of being dropped. Granted, handlers generally try to avoid these types of mishaps. When equipment is out of your hands, its safe transportation is out of your control. One way to ensure that your product survives its journey from the factory to the point of installation is to perform drop test analysis and verify that it survives without damage. That way, your company isn’t answering warranty claims from customers who received damaged goods that left your warehouse in mint condition.

Although the methodology for a drop test is fairly standard, it is challenging to capture the finer details that occur in reality. The approach presented here addresses this by including pre-stresses in the LS-DYNA drop test setup.

Motivation

In drop test problems involving large appliances, the effects of bolt loads or pre-stresses are generally ignored. However, in some cases it is desirable to apply a pre-stress loading to the structure before performing a transient dynamic analysis or, simply, a drop test analysis, because the demand for product safety requires increasingly accurate simulation models.

In this article, I used LS-DYNA, a highly advanced, general-purpose, nonlinear finite element program that is capable of simulating complex real-world problems.

First, you need to perform a pre-stress analysis of the bolts before conducting the drop test analysis. Then you integrate the stresses and strains obtained from the pre-stress analysis into the drop test analysis setup.

Drop Test with Included Pre-Stresses (two-step method)

In LS-DYNA, I define bolt pre-load (non-iterative loading type) using *INITIAL_AXIAL_FORCE_BEAM (Type 9 beams only) and *INITIAL_STRESS_SECTION (solid elements only). These keywords work with *MAT_SPOTWELD. The failure models apply to both beam (Type 9) and solid elements (Type 1).

*INITIAL_AXIAL_FORCE_BEAM will pre-load beam elements to a prescribed axial force.

Screenshot of keyword in drop test analysis

In the above screenshot of the keyword, BSID is Beam Set ID. I define the preload curve (axial force vs. time) with *DEFINE_CURVE. LCID is the Load Curve ID.

The video below shows the pretension in the beams.

*INITIAL_STRESS_SECTION will pre-load a cross-section of solid elements to a prescribed stress value. Pre-load stress (normal to the cross-section) is defined via *DEFINE_CURVE.

Screenshot of keyword in drop test analysis

In the screenshot placed above, ISSID is the section stress initialization ID, CSID the cross-section ID, LCID the load curve ID (pre-load stress versus time), PSID the part set ID, and VID the vector ID (direction normal to the cross-section). You can define the vector if *DATABASE_CROSS_SECTION_SET is used to define the cross-section.

In the video, you can see the pre-stresses in solid elements when I used *INITIAL_STRESS_SECTION.

Video Courtesy: LSTC

*INTERFACE_SPRINGBACK_LSDYNA allows LS-DYNA to create a DYNAIN file at the end of the simulation containing deformed geometries, residual stresses, and strains. This file sets me up for the next phase of analysis where I use it with the *INCLUDE keyword. However, the DYNAIN file neither includes contact forces nor contains nodal velocities. These quantities from the pre-stress analysis do not automatically carry over to the drop test.

Drop Test with Included Pre-Stresses – Both in One Step!

The previous method always involves manual intervention, which can lead to unknown errors. A drop test of an appliance that accounts for pre-stresses can instead be specified in one step using *DEFINE_TRANSFORM.

*DEFINE_TRANSFORM allows you to scale, rotate, and translate the appliance, and you must define it before you use the *INCLUDE_TRANSFORM command.

Screenshot of keyword in drop test analysis

Screenshot of keyword in drop test analysis

In the above screenshots, TRANSID refers to the transformation ID defined in *DEFINE_TRANSFORM; the part specified in the file name is included with this TRANSID applied.

The video below shows the drop test analysis of an appliance in which *DEFINE_TRANSFORM allows the appliance to be pre-stressed before the actual drop happens. The von Mises stress contours show that stresses develop in the parts due to the pre-stressing of the beams before the actual impact.

Saves time!

Using this approach, I can save about 20% of the time required to set up a pre-stress analysis and a drop test analysis together. In addition, manual intervention is eliminated.

Thanks to this, I get to submit my simulation jobs to the solver before I head into the weekend. I return in the following week to view and post-process the final results.

If you want to learn more about this, please talk to us.


How Fatigue Made Me Fall From The Chair?

This article explains the setup of a simple fatigue analysis in ANSYS Workbench using an example. For beginners, this article demystifies fatigue analysis.

Context

When I was ten years old, I was fond of a chair that was small and easily movable. After school, I used to sit on it and watch Aladdin tales on television. One day, as usual, I sat on it. Suddenly the chair broke in half and I fell to the floor in front of my sister. For obvious reasons, I was embarrassed, and my sister made fun of me the whole day. I went to sleep that day with a few unanswered questions.

Why did the chair fail when it was working fine for a few years? Why didn’t it fail on the first day I sat on it?

Illustration of a broken chair as a result of fatigue
My broken chair! 🙁

Motivation

Fast forward to my engineering days: I was taught that cyclic loading on any structure can make that structure fail – fatigue failure. Only then could I understand why my beloved chair had failed.

Many of you might have heard stories like the one above or even experienced something similar yourself. The fact is that the majority of structures, irrespective of their size, experience a phenomenon like fatigue. If a simple structure with a simple load cycle could fail because of fatigue, imagine a complex structure with a complex loading cycle. Yes, the consequences are catastrophic for the manufacturer as well as the user.

According to an NBS report, “between 80-90 % of all structural failures occur through a fatigue mechanism.” Incorporating fatigue simulation upfront into the product development cycle plays a vital role in optimizing the structural integrity of your product, and it significantly reduces the cost of failure.

In this article, I show a simple fatigue analysis carried out using the ANSYS Fatigue Tool. If you wish to conduct the analysis as per the FKM guidelines, you’ll be interested in this CADFEM ANSYS Extension.

Workflow

For a fatigue analysis, a static structural or transient analysis is a prerequisite. To this end, I consider a simple chair geometry for a static structural analysis and define appropriate loads and boundary conditions. I define a point mass of 75 kg acting on the chair; this loading can be considered misuse for a child’s chair. The resultant static stress (24 MPa) did not exceed the yield strength (54 MPa) of the assigned material.

There! I got the answer to one of the questions from my story. The chair didn’t fail on the first day I sat on it because the load applied that day was not sufficient to exceed the yield strength of the material.

Analysis setup for fatigue study
Loads and Boundary Conditions

Results of static structural analysis before fatigue analysis
Equivalent von-Mises Stress


Setting up the analysis

Subsequent to the setup of static structural analysis, I launch the ANSYS Fatigue Tool using the following steps.

Setting up fatigue analysis
Solution>Insert>Fatigue>Fatigue tool

Analysis Type

ANSYS Fatigue Tool offers two methods to calculate fatigue life.

  • Strain Life
  • Stress Life

The strain life approach is widely used at present because of its ability to characterize low cycle fatigue (<100,000 cycles), whereas the stress life approach addresses high cycle fatigue (>100,000 cycles).

Specifying details in the fatigue tool
Details View of Fatigue tool

I chose the stress life approach to execute this example and subsequently I defined the appropriate S-N (Stress–Cycles) curve in the engineering data.

Loading Type

Unlike static failure, fatigue damage occurs when the stress at a point changes over time. Therefore, it is essential to define the way the load repeats after a single cycle; in other words, the type of fatigue loading determines how the load repeats over time.

Accordingly, I chose the zero-based loading type for the current example, which means I apply the load and remove it, resulting in a load ratio of 0. For fully reversed loading, I would apply a load and then an equal and opposite load, which results in a load ratio of -1.

Applying zero-based loading in fatigue analysis
Zero-Based loading

In both cases, the amplitude of the load remains constant. Therefore, looking at a single set of simulation results will give you an idea of where fatigue failure might occur.

Mean Stress Theory

Now that I have defined analysis and loading types, I need to choose a mean stress theory.

Zero Mean Stress loading for fatigue analysis
Zero Mean Stress loading

Mean stress is the average of the maximum and minimum stress during the fatigue load cycle. Mostly, fatigue data is assumed for zero mean stress, i.e. fully reversed loading. However, fully reversed loading conditions (zero mean stress) are rarely met in engineering practice. Hence, a mean stress correction theory has to be chosen to account for the mean stress.

For the stress life approach: if experimental data at different mean stresses exist, the mean stress can be accounted for directly by interpolating between the material curves. However, it is unlikely to have experimental data at all mean stresses. Therefore, several empirical relations are available – including the Goodman, Soderberg, and Gerber theories – which use static material properties (yield strength and tensile strength) together with S-N data to account for the mean stress. In general, I advise against empirical relations whenever S-N curves at multiple mean stresses exist.

Different Mean Stress Theories for Fatigue Analysis
Different mean stress correction theories (Goodman Theory is highlighted)

The Goodman theory is a common choice for brittle materials, whereas the Gerber theory is a common choice for ductile metals. For the current analysis, I chose the Goodman theory.
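The Goodman correction and a subsequent S-N life lookup can be sketched in a few lines of Python. The stress values and the Basquin-type S-N coefficients A and b below are assumptions for illustration, not data from the chair model in this article:

```python
# Goodman mean-stress correction for a zero-based load cycle (illustrative).
s_max, s_min = 24.0, 0.0     # MPa, zero-based loading
s_ult = 500.0                # MPa, ultimate tensile strength

R = s_min / s_max            # load ratio: 0 for zero-based loading
s_a = (s_max - s_min) / 2.0  # alternating stress
s_m = (s_max + s_min) / 2.0  # mean stress

# Goodman: equivalent fully reversed (zero-mean) alternating stress
s_ar = s_a / (1.0 - s_m / s_ult)

# Life from an assumed Basquin-type S-N curve, S = A * N**b
A, b = 900.0, -0.1
N = (s_ar / A) ** (1.0 / b)
print(f"R = {R:.0f}, sigma_ar = {s_ar:.2f} MPa, life ~ {N:.2e} cycles")
```

With zero-based loading, the mean stress equals the alternating stress, so the Goodman correction raises the equivalent fully reversed stress slightly above the raw amplitude before the S-N lookup.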

Fatigue Life

Like any other result in ANSYS Workbench, fatigue life can be scoped to a geometric entity. For stress life with constant amplitude loading, if the equivalent alternating stress at a point is lower than the lowest alternating stress defined on the S-N curve, the life at that last-defined point of the curve is used.

For this example, 3,100,000 cycles is the expected life of the chair. This means that a person of 75 kg can sit on this child’s chair 3.1 million times. Ignoring this and continuing to sit on it beyond the expected life would very soon lead to the same fate as the boy in the story.

Fatigue life extracted from ANSYS Fatigue Module
Fatigue life extracted from ANSYS Fatigue Tool

Conclusion

Wasn’t it easy? Yes, it is easy to perform this analysis provided you have the material data. In case you are not aware, ANSYS Mechanical Pro, Premium, Enterprise and ANSYS AIM offer ANSYS Fatigue Tool.

What are you waiting for? Start realizing your product promise using ANSYS products.

P.S. Just in case you were wondering what happened after the chair broke, my mother bought us a brand new chair the next day!


Modeling Thermostats in ANSYS Workbench

This article talks about modeling thermostats in ANSYS Workbench using the COMBIN37 element – an approach that is quicker, more sophisticated, and automated. While working on a customer project, I struggled with the conventional approach in ANSYS 17.2. It was frustrating, so I decided to look for possible alternatives in the ANSYS Customer Portal. I found a solution using COMBIN37; however, it only featured the option of switching OFF the heat source. Next I stumbled upon PADT’s ACT Extension, but again it didn’t seem useful for the problem described in this article. Thanks to the able support of the ANSYS staff, I was able to obtain the solution below.

Thermostats are used in many applications. For the analyst, therefore, it is essential to understand the type and functionality of the component she wants to replicate in the simulation environment. Before anything else, let me brief you about thermostats.

What is a thermostat?

Wikipedia defines thermostat as:

A component which senses the temperature of a system so that the system’s temperature is maintained near a desired set point.

Thermostats are widely used in varied industrial applications, however they are primarily used in heating and cooling systems. Air-conditioners, refrigerators, automotive coolant control, electric iron box, actuators, control valves are a few of the many applications. Thermostats are also used in many manufacturing processes to maintain the desired temperature limits.

Let us model it in ANSYS Workbench, shall we?

Problem definition

For this article, I selected a small block that is initially at room temperature. This block is heated by a source on the bottom face (possibly a heater) that inputs 2 W for 10 minutes. The objective is to maintain the temperature at a certain point on the body (indicated with a red label; hereinafter referred to as the sensor) within 170-175°C.

Modeling Thermostats: Block geometry with a label for sensor
Block geometry chosen for this article

Possible solutions

In a simulation environment, there are two ways to model a thermostat.

Many beginners will adopt a conventional approach. In this approach, heat is fed to the surface until the temperature at the sensor reaches 175°C. However, the analyst doesn’t know in advance the time at which the sensor attains 175°C. So she:

  • lets the computation run until the desired temperature is reached
  • records the time at which this temperature is attained
  • divides the time-dependent loads into a number of load steps to control the switch ON/OFF
  • finally runs the analysis again until the temperature at the sensor falls to 170°C

This cycle continues for the entire analysis duration; this example runs for only 10 minutes. For real problems, it is tedious and time-consuming to simulate the thermostat functionality and regulate the temperature with this procedure.

Modeling Thermostats with COMBIN37

The other way of modeling thermostats is much quicker, more sophisticated, and automated. To make it work, we build a connection between the sensor node and the heat source face to regulate the temperature: we introduce a temperature regulator modeled with the COMBIN37 element in ANSYS.

Modeling Thermostats: Illustration depicting two humans aiming to plug one cable into another.

I’ll help you understand COMBIN37: it is a unidirectional control element that can turn OFF and ON during the analysis. COMBIN37 has one pair of active nodes (I, J) and one optional pair of control nodes (K, L). Each node has one degree of freedom (DOF), valid for structural (translations/rotations/pressure) and thermal (temperature) analyses. COMBIN37 has many other applications; you can find detailed information in the ANSYS Help manual.

Unfortunately, there is no direct way to model this in ANSYS Workbench, so a set of APDL scripts has to run in the background. Simply put, the two nodes I and J of COMBIN37 are connected to the heat source face and the sensor node, respectively. Three arguments are defined in the scripts: the first two define the temperature range at the sensor node, and the third defines the heat flow from the source to be supplied within that range – which indirectly defines the ON/OFF behavior.

Modeling Thermostats: Image showing the thermostat model used for this example. Heat source to switch off when temperate is 175 degree Centigrade and switch on when it drops to 170 degrees.
Operating range of the thermostat
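Conceptually, the regulator built here is a bang-bang controller with hysteresis. A minimal Python sketch of that logic, using an assumed lumped-capacitance model of the heated block (the heat capacity, loss coefficient, and ambient temperature are invented for illustration, not taken from the model in this article), shows the same ON/OFF cycling between the two set points:

```python
# Bang-bang (hysteresis) control on a lumped thermal model (illustrative).
on_val, off_val = 170.0, 175.0   # degC: switch ON below, switch OFF above
q_on = 2.0                        # W, heater power while ON
C = 2.0                           # J/K, lumped heat capacity (assumed)
hA = 0.01                         # W/K, loss coefficient to ambient (assumed)
T_amb = 22.0                      # degC, ambient temperature
T = T_amb                         # initial block temperature
dt, t_end = 0.1, 600.0            # s: time step, 10-minute duration

heater_on = True
history = []
t = 0.0
while t < t_end:
    if T >= off_val:
        heater_on = False         # above the upper set point: switch OFF
    elif T <= on_val:
        heater_on = True          # at or below the lower set point: switch ON
    q = q_on if heater_on else 0.0
    T += dt * (q - hA * (T - T_amb)) / C   # explicit Euler update
    history.append((t, T, heater_on))
    t += dt

print(f"final T = {T:.1f} degC")
```

Note that dt must stay small relative to the thermal response; a large step overshoots the 170-175°C band, which is the same reason a sufficiently small minimum time step matters in the Workbench model.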

Create COMBIN37 using commands

Let me now take you to the step-by-step procedure describing the commands used to model a thermostat in ANSYS Workbench.

Step 1

Since we’re creating an element, which is possible only in the pre-processor, you must first enter the pre-processor.

/prep7                                                  ! Enter the Pre-processor

Step 2

Define the arguments for the magnitudes of temperature range within which the heat source should supply the heat. This definition of arguments will allow you to parametrize the temperature peaks as well. Parametrization allows for various studies such as sensitivity analysis, optimization or robustness evaluation in ANSYS Workbench. This can become important in coupled physics problems.

on_val=arg1                                       ! Set on_val to arg1

off_val=arg2                                      ! Set off_val to arg2

Step 3

Select the sensor node and obtain its node ID

cmsel,s,sensor                       ! Select the named selection containing the sensor node or vertex

sensor_node=ndnext(0)                 ! Get the node id

nsel,all                             ! Select back all nodes in the database

Step 4

Create material ID for COMBIN37 element, define the type of DOF, ON/OFF behavior and input the third argument for heat flow with appropriate key options and real constants.

*GET,max_et,etyp,0,num,max           ! Get the highest element type id in the model; we increment it to ensure a new, unique id for COMBIN37

et,max_et+1,37,,,8,,1                ! Control element using the temperature DOF; set the thermostat behavior

r,max_et+1,,,,on_val,off_val,arg3    ! Set the on/off set points and the heat flow rate

ex,1000,1                            ! Dummy material number/material id

Step 5

Create the nodes I & J for COMBIN37, and then create an element COMBIN37 by assigning these nodes to it.

*GET,max_nd,NODE,0,num,maxd          ! Get the highest node id in the model

n,max_nd+1                           ! Create node I of the COMBIN37 – location doesn’t matter

n,max_nd+2                           ! Create node J of the COMBIN37

type,max_et+1                        ! Set the element type attribute to the new unique id

real,max_et+1                        ! Set the real constant attribute to the new unique id

mat,1000                             ! Set the material attribute pointer

e,max_nd+1,max_nd+2,sensor_node      ! Create the COMBIN37 element

Step 6

Select the nodes of the heat source and couple them to node I of the COMBIN37. This coupling copies all the information from node I to the nodes on the heat source.

cmsel,s,Heater_Strip                 ! Select the named selection containing the heat source face

nsel,a,,,max_nd+1                    ! Also select node I of the COMBIN37

cp,1,temp,all                        ! Couple the temp DOF of node I of the COMBIN37 to the heater strip (source face)

nsel,all                             ! Select all nodes in the model

Step 7

Exit the pre-processor and enter the solution processor.

fini                                                                ! Finish out of pre processor

/solu                                                             ! Enter into the solution

Inputs

Once I input the temperature limits, the heat flow, and the other applicable boundary conditions, I solve the model and check whether the desired output is obtained.

Modeling Thermostats: Screenshot showing the result of the APDL commands that appear as Input Arguments for temperature limits and heat source input in ANSYS Workbench
APDL commands appear as Input Arguments in ANSYS Workbench

Results Verification – Part I

From the results, the temperature extracted at the sensor node should look as shown in the figure below. One can observe that the temperature at the sensor node initially rises to 175.01°C and then gradually falls to 170°C. As soon as the temperature falls below 170°C, the heat supply is automatically switched ON, which leads to a temperature rise at the sensor node again. This cycle continues until the final time step of the analysis.

Modeling Thermostats: Image showing geometry of the problem along with the history of temperature over a period of 10 minutes.
Temperature vs. Time plot

To check whether the applied heat flow behaves exactly as defined, use a User Defined Result: set the Material ID to 1000 (refer to the command script in Step 4) and use the expression SMISC2 (the second sequence of element summable miscellaneous data – see the ANSYS Help for more information).

See the figure below for the definition and output of this user defined result. Remember: this result can be viewed only from ANSYS Release 18.0 onwards, where it is available as a beta feature.

Modeling Thermostats: Screenshot of ANSYS Workbench showing the way to select material id and result expression as SMISC2.
Obtaining User Defined Result in ANSYS Workbench

Modeling Thermostats: Plot showing the history of heat flow input over time
Plot of Heat Flow Input vs. Time

Results Verification – Part II

To verify that the heat input and the desired output follow the input definition, I plotted a chart with the temperature result at the sensor and the user defined result (heat input). In the figure below, the purple line indicates the heat flow, while the green line indicates the temperature at the sensor. You can see that the heat flow is OFF when the peak is reached (175°C, region between the red arrows) and ON as soon as the temperature drops to 170°C (region between the green arrows), constantly supplying 2 W of heat. Hence, the thermostat works!

Note: The Y axis in the figure below is normalized. This does not imply a heat input of 1 W.

Modeling Thermostats: Plot overlaying history of temperature and heat flow inputs over time. Using illustrations, the portion where heat flow is switched off and on is also shown.
Temperature & Heat Flow Inputs vs. Time

Benefits

This article targets analysts with beginner-level experience in modeling thermostats. Without the COMBIN37 element, analysts can struggle for weeks to reach the end result because they have to regulate the temperature manually by re-running the analysis.

Using the COMBIN37 element, the solution is much quicker, more sophisticated, and automated. For this study, the automated technique took hardly two hours of run time, while the manual approach took roughly 60 hours. The time savings from using this element in ANSYS are enormous.

Suggestions

In summary, I suggest that you define the number of substeps such that the minimum time step is sufficiently small. If the time step is too large, the temperature increase or decrease per step will be of a significantly larger magnitude, and the temperature will keep falling outside the defined range.

The time step is also important for accurately capturing temperature values, especially when the range is quite small – a common situation in many industrial applications. For example, in the problem described above, the range is 170-175°C, i.e. a difference of only 5 degrees centigrade. Accordingly, I chose a suitable time step for this example.

Screenshot showing settings for analysis for modeling thermostats
Settings for Analysis

Before solving the problem, ensure that you have entered the argument values in the solver unit system, irrespective of the current/working unit system.


3 Benefits of ANSYS SpaceClaim for 3D Printing

In this article, I will describe 3 benefits of ANSYS SpaceClaim Direct Modeler for 3D Printing and other applications. Specifically I will focus my attention on the Facet Tool in this article.

While searching for freely available CAD models on GrabCAD.com, I chanced upon the challenges section, which piqued my interest. To my surprise, I found that about 75% of the recent challenges were related to topology optimization. For most of these challenges, lightweighting will yield a final design that is optimal in weight. However, such an output is too complex for traditional manufacturing processes. In recent years, additive manufacturing – often referred to as 3D printing – has emerged as the manufacturing process of choice for several contemporary applications.

For topology optimization, ANSYS is the simulation tool of choice. In the latest Release 18, a significant thrust was given to this topic. The technology is very powerful and highly effective for lightweighting designs. Typically, topology optimization delivers the design in STL file format. In my experience, this output is often fraught with poor facet quality and requires cleanup by a competent tool.

Typical STL File Output of a Bracket after Topology Optimization towards 3D Printing
Typical STL File Output of a Bracket after Topology Optimization

The full suite of ANSYS Simulation Software offers not just solvers for multiple physics, but also several value added tools such as ANSYS SpaceClaim Direct Modeler (SCDM). This tool allows product companies to launch their offerings faster to market.

Now, SCDM has several useful features that allow geometry manipulation and clean-up. Among them, I found the Facet Tool to be extremely useful. After topology optimization is complete, the STL file output from ANSYS is imported into SCDM. The Facet Tool helps clean up the poor facet quality of the STL output and prepares the design for validation using ANSYS Mechanical.

For better understanding, I have included the typical workflow below.

Workflow for Topology Optimization for 3D Printing
Workflow for Topology Optimization

With this context in place, I will now introduce you to the 3 significant benefits of using ANSYS SpaceClaim Direct Modeler for 3D Printing applications.

HIPP Add-In for Reverse Engineering

HIPP is an SCDM add-in developed by ReverseEngineering.com. This tool is quite useful for engineers performing reverse engineering – with the eventual goal of producing the desired part using 3D Printing. For this case, the approach typically starts with scanning of the part desired for reverse engineering. The scan results in an STL file format created directly in SCDM; this automatic scan to STL is powered by the HIPP add-in. The Facet Tool in SCDM is then used to repair and prepare a watertight geometry.

Here’s an example of the scanned geometry of top profile of a piston rod that was generated in SCDM using the HIPP add-in. The facets in this geometry did not capture the profile accurately. Furthermore the geometry has undesired holes along with unwanted parts.

Image of a scanned geometry of a part in SCDM (using HIPP Add-In) for 3D Printing
Scanned geometry of a part in SCDM (using HIPP Add-In)

Using the Facet Tool, the repaired geometry is now ready for topology optimization and design validation before producing it using 3D Printing.

Image of the modified geometry in SCDM using Facet Tool for 3D Printing
Modified geometry in SCDM using Facet Tool

Save Resources – Faster to Market

There are numerous software tools for STL preparation; however, the SCDM Facet Tool offers many value-adding additional capabilities. For very little investment, the Facet Tool is strong at combining multiple solid parts with faceted geometries in a user-friendly manner – a feature with several advantageous implications for 3D printing. Furthermore, the tool is very easy to use and requires little expertise for geometry repair and preparation. Preparing the bracket geometry illustrated at the beginning of the article took me 10-15 minutes (see the image below) – fairly quick compared to other facet modeling tools, which took me 2-3 times longer.

Image of bracket geometry modified after using SCDM Facet Tool for 3D Printing
Bracket geometry modified after using SCDM Facet Tool

Preventing Failures in 3D Printing

The Facet Tool has features to detect thickness and overhang problems before the model is sent for 3D printing. Overhangs present a challenge for 3D printing without support material. Problems such as these can be prevented with techniques like tear-dropping and tapering, among others. However, the effects of an overhang cannot be judged immediately unless you are a 3D printing professional.

The Facet Tool has a feature that detects overhangs using parameters specific to 3D printing. In particular, the thickness feature detects all geometry that is thinner than the minimum thickness specified by the printer OEM. In addition, by providing the printing direction, I could identify thickness- and overhang-related problems beforehand.

Other Applications

This topic is also of particular interest to CADFEM because we invest in Digital Cities – a strategic initiative of CADFEM International that aims to simulate the cities of our future. This topic is special and important since it involves studying the effects of disaster scenarios such as earthquakes, tsunamis, pollution, and crowd behavior, among others.

virtualcitySYSTEMS, a CADFEM International group company, develops 3D city models using scanned data of terrains. For these city models, we use the Facet Tool to repair the geometry before performing urban simulations.

In future posts, I will delve further into using CFD and particle simulations for better modeling of 3D Printing applications.
