A channel for MSC Nastran optimization and machine learning. Want to perform optimization or machine learning? Contact me at christian@ the-engineering-lab.com
Hello, your videos are great. Thank you! Could you please make a video in PATRAN showing how to model a bolted joint using (1) beam elements with RBE2s and (2) CBUSH elements, and then compare the bolt forces (Fx, Fy, Fz, Mx, My, Mz, with the coordinate system definition) for both methods?
This video was the second take on Nastran coordinate systems. For those interested in a slightly different explanation of Nastran coordinate systems, the first take is available at this link: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-FIJI3YH3DKY.html
# Below is Python code that does the same procedure but faster

import h5py
import numpy
import json

class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, numpy.floating):
            return numpy.float64(obj).item()
        return json.JSONEncoder.default(self, obj)

# Comments:
# A list of datasets is available in:
# 1. web.mscsoftware.com/doc/nastran/2018/release/DataType.html
# 2. The Nastran documentation directory, e.g.
#    /msc/MSC_Nastran_Documentation/2021.4/doc/relnotes/v20214/DataType_v20214.html

def write_dataset_to_csv_file(path_of_h5_file, dataset_name, name_of_csv_file):
    file = h5py.File(path_of_h5_file, 'r')

    # Recover the DOMAINS dataset and index it.
    # The DOMAINS dataset contains information about the SUBCASE, TIME_FREQ_EIGR, etc.
    dataset_domains = file['/NASTRAN/RESULT/DOMAINS']
    dataset_original_domains_in_list_form = dataset_domains[...].tolist()
    dataset_domains_index = ['dummy_element_a', 'dummy_element_b']
    for line in dataset_original_domains_in_list_form:
        dataset_domains_index.insert(line[0], line)

    # Recover the dataset of interest
    dataset1 = file[dataset_name]
    dataset_original = dataset1[...].tolist()

    # Column names
    # Take the column names from the H5 file (type: tuple), convert them to a Python
    # list (type: list), and generate a header string for the CSV file
    column_names_domains = list(dataset_domains.dtype.names)
    column_names = list(dataset1.dtype.names)
    name_of_last_column = column_names[-1]

    # Determine if there are SUBCASEs (DOMAINS) to add
    attach_domains = False
    if name_of_last_column == 'DOMAIN_ID':
        attach_domains = True

    column_names = ', '.join(column_names)
    if attach_domains:
        # Only append the DOMAINS columns when the rows will actually carry them
        column_names = column_names + ', ' + ', '.join(column_names_domains)

    # Begin adding the data to the CSV file
    text_file = open(name_of_csv_file, 'w', encoding='utf8', errors='replace')
    text_file.write(column_names + '\n')
    for line in dataset_original:
        # The .tolist() is supposed to take any number of type 'numpy.float64' and
        # convert it to a Python 'float'. When reading dsoug7.h5 or dsoug10.h5, some
        # issues were encountered. After some research, the solution was to build a
        # custom encoder: if a 'numpy.float64' sneaks in, NumpyEncoder will manually
        # convert it to a Python float type.
        outgoing_string = json.dumps(line, cls=NumpyEncoder)

        # If this dataset has corresponding DOMAINs (SUBCASE, TIME_FREQ_EIGR, etc.),
        # then associate the information
        if attach_domains is True:
            domain_id = line[-1]  # The DOMAIN_ID is in the last column of the dataset of interest
            line_in_domains = dataset_domains_index[domain_id]  # Recover the corresponding DOMAIN from the indexed list
            outgoing_string_domain = json.dumps(line_in_domains, cls=NumpyEncoder)  # Convert each number
            outgoing_string = outgoing_string + ',' + outgoing_string_domain  # Create a line to add to the CSV file

        # Remove any brackets
        outgoing_string = str.replace(outgoing_string, ']', '')
        outgoing_string = str.replace(outgoing_string, '[', '')

        # Add a newline character to force a separate line
        outgoing_string = outgoing_string + '\n'

        # Add the line to the CSV file
        text_file.write(outgoing_string)

    # Close the file
    text_file.close()

if __name__ == '__main__':
    write_dataset_to_csv_file('model.h5', '/NASTRAN/RESULT/ELEMENTAL/STRESS/ROD', 'file_1.csv')
    write_dataset_to_csv_file('model.h5', '/NASTRAN/RESULT/NODAL/DISPLACEMENT', 'file_2.csv')
The auto model's units are in inches AND with a scale factor of 39.37????? THIRTY-NINE POINT THIRTY-SEVEN... They can't be serious, what the actual duck, bro. An ultra-modified McDonald's imperial system, and "kolokithia tubana" (Greek for "utter nonsense"), as we say in my village.
Hello there! I have a university project on Nastran and I need a mechanical automobile part, such as a brake or spark plug. Would you be kind enough to send me a file of a part by email that I can study? It would be really nice of you 🙏🏻
It was done for demonstration purposes. Basically, I wanted to show the option to cherry-pick which errors are part of the objective and which are only constrained. You can certainly mark all the checkboxes if you prefer.