SteveDoyle2 / pyNastran

A Python-based interface tool for Nastran's file formats

Question: Managing INCLUDEs #438

Closed PabloBeranger closed 5 years ago

PabloBeranger commented 6 years ago

It would be very interesting and useful to be able to get back a BDF instance for every INCLUDE. With that, it would be very easy to compute statistics per INCLUDE and make modifications.

Example: model_FEM_GLOBAL.dat

        In the system section > INCLUDE_system_01.bdf
                              > INCLUDE_system_02.bdf
                              > ...

        In the executive section > INCLUDE_executive_01.bdf
                                 > INCLUDE_executive_02.bdf
                                 > ...

        In the case control section > INCLUDE_casectrl_01.bdf
                                    > INCLUDE_casectrl_02.bdf 
                                    > ...

        In the bulk section > INCLUDE_bulk_01.bdf
                                > INCLUDE_bulk_01_aa.bdf
                                > ...
                            > INCLUDE_bulk_02.bdf  
                                > INCLUDE_bulk_02_aa.bdf
                                > ...

Then Instance_BDF_FEM_GLOBAL = BDF()

When you do

Instance_BDF_FEM_GLOBAL.read_bdf('model_FEM_GLOBAL.dat')

    > When you find an INCLUDE, you create a BDF instance for that INCLUDE

      Instance_BDF_FEM_INCLUDE_Systeme_01 = BDF()
      Instance_BDF_FEM_INCLUDE_Systeme_01.read_bdf('INCLUDE_system_01.bdf')

    > .....

This way I could:
  > compute statistics on the different INCLUDEs
  > make modifications and rewrite the INCLUDEs with pyNastran
  > .....

Do you think this would be possible?

SteveDoyle2 commented 6 years ago

I'm trying to understand the use cases better. You mentioned one, but what others?

So what's currently done is you pack all the lines into a single deck. This loses all association with the line numbers and file that each individual line came from. It even technically lets you split a single card across multiple files. Then, you assume the user has a system section and read until you hit a SOL line. Then you go find a CEND for the executive deck, and a BEGIN BULK for the case control deck. What's left is the bulk data lines.
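
A rough sketch of that splitting step (not the actual pyNastran reader; it assumes the SOL, CEND, and BEGIN BULK lines are all present):

def split_deck(lines):
    # find the section markers in the packed list of lines
    isol = next(i for i, line in enumerate(lines) if line.upper().startswith('SOL '))
    icend = next(i for i, line in enumerate(lines) if line.upper().startswith('CEND'))
    ibulk = next(i for i, line in enumerate(lines) if line.upper().startswith('BEGIN BULK'))

    system = lines[:isol]                      # NASTRAN system cards
    executive = lines[isol:icend + 1]          # SOL ... CEND
    case_control = lines[icend + 1:ibulk + 1]  # subcases ... BEGIN BULK
    bulk = lines[ibulk + 1:]                   # bulk data (up to ENDDATA)
    return system, executive, case_control, bulk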

For your include case, you need the line association to be stored as well. So for example, line 50 came from line 10 in the file model.bdf and the section is the bulk data section. That doesn't tell you if that line is a comment or not and doesn't tell you if it's even used (e.g., after an ENDDATA). However, it lets you do the tree graph you showed.

If you instead wanted to know where each card was in the deck, you'd want to use a very different data structure (e.g., lines 20-30 in model2.bdf make the LOAD card). In that case, you might even get away without storing the lines.
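
For illustration, the two bookkeeping options might look something like this (the names and values are hypothetical, not existing pyNastran data structures):

# option 1: per-line association -> (source file, source line number, section)
line_map = {
    50: ('model.bdf', 10, 'bulk'),            # packed line 50 came from model.bdf, line 10
}

# option 2: per-card association -> the file and line span that built the card
card_map = {
    ('LOAD', 23): ('model2.bdf', (20, 30)),   # the LOAD card spans lines 20-30 of model2.bdf
}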

For large BDFs, that seems like a lot of extra RAM to store the lines, not to mention it's slower, so as part of the BDF object, no, but maybe as a separate option.

PabloBeranger commented 6 years ago

I did not understand everything in your answer.

I have a big FEM that is distributed across includes, with includes called by other includes (a tree structure).

I want to be able to edit includes and rewrite them. For this I have a GLOBAL BDF instance of the total FEM (which takes a lot of time to load, even when no errors occur). Then I have a recursive function that reads all the includes and creates a BDF instance for each include.

For example, an include contains PROPERTIES whose MATERIAL is in another INCLUDE. I use the GLOBAL BDF instance to find the characteristics of the MATERIALS referenced by the PROPERTIES in the INCLUDE. I then make the modifications I want on the BDF instance of the INCLUDE and write that BDF instance back out.
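
A minimal sketch of this kind of lookup, assuming the include can be read as a bulk-only (punch) deck and that its property cards expose Mid() (the filenames are illustrative):

from pyNastran.bdf.bdf import BDF

global_model = BDF()
global_model.read_bdf('model_FEM_GLOBAL.dat')                          # full FEM, includes resolved

include_model = BDF()
include_model.read_bdf('INCLUDE_bulk_01.bdf', punch=True, xref=False)  # bulk-only include

# for each property in the include, look up its material in the global model,
# since the MAT card may live in a different include
for pid, prop in sorted(include_model.properties.items()):
    mid = prop.Mid()
    mat = global_model.materials[mid]
    print(pid, mid, mat)

# ...modify the include's cards here, then rewrite just that include
include_model.write_bdf('INCLUDE_bulk_01_modified.bdf')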

How do you modify an INCLUDE in pyNastran and save the changes back to the INCLUDE file?

SteveDoyle2 commented 6 years ago

It's a data management problem, right? I don't think this is really an issue of not being able to read a file. I think the structure you have is causing problems. I've dealt with a configuration with 550 subcases across 120 files, about 200 includes, unicode, duplicate element & property ids, incomplete references (e.g., not all LOAD ids exist). It's not fun, but it can be done. Getting rid of duplicate ids is sort of the key.

For starters, I would name the files something like wing_bulk.inc or static_loads_case.inc. If you need to flip the names back at the end, that's easy. Then, I'd use that to read the sub-decks from self.active_filenames. I'd also use pickle to save/load that deck.
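
A minimal sketch of the pickle idea using the standard pickle module (whether the full BDF object pickles cleanly can depend on the pyNastran version and on cross-referencing, so this is an assumption):

import pickle
from pyNastran.bdf.bdf import BDF

model = BDF()
model.read_bdf('wing_bulk.inc', punch=True, xref=False)  # xref=False keeps the pickled object simpler

# save the parsed deck so later runs can skip the slow text parse
with open('wing_bulk.obj', 'wb') as obj_file:
    pickle.dump(model, obj_file)

# ...later, reload it instead of re-reading the bdf
# NOTE: depending on the pyNastran version, the model's logger may need to be dropped before pickling
with open('wing_bulk.obj', 'rb') as obj_file:
    model = pickle.load(obj_file)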

Managing the case control deck would be tricky, but with your structure of having a properties/materials deck, you know that deck contains only properties or materials. For the bulk data decks, I'd probably just loop over all the data structures and add a filename attribute to each GRID, CQUAD4, etc. Then make your modifications in the global model, replace the original card, and write the deck.
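
A hedged sketch of that tagging idea, assuming the cards accept an arbitrary attribute (the attribute name and filenames here are made up, not existing pyNastran API):

# tag each card with the include it should be written back to
for nid, node in model.nodes.items():
    node.include_filename = 'wing_grids.inc'
for eid, elem in model.elements.items():
    elem.include_filename = 'wing_bulk.inc'
for pid, prop in model.properties.items():
    prop.include_filename = 'wing_props.inc'

# later, group the cards by that attribute and write one deck per filename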

SteveDoyle2 commented 5 years ago

I've gotten to the point where this is now potentially possible. I still need to test the slowdown and can create new methods if need be.

I've got a bag of cards by file number and line number. I was thinking of attaching a file number to a card and then opening a dictionary of files and writing to that dictionary. At that point, you just need to map the file number (which maps to some path) to a name. Does that seem reasonable?

The interface is similar to:

model.read_bdf(bdf_filename, save_file_structure=True)
out_filenames = {}
for i, fname in enumerate(model.active_filenames):
    out_filenames[fname] = 'out' + fname
model.write_bdfs(out_filenames)

or to just write 1 file:

out_filenames = {}
for i, fname in enumerate(model.active_filenames):
    out_filenames[fname] = 'out' + fname
    break
# the key gets converted to an integer, so when we make the file, we make a file that goes to dev/null
model.write_bdfs(out_filenames)

The executive/case control decks will be written to the primary deck.

Note that line numbers are for future work and are not sequential unless you have no comments and only single-line cards. They're also not accurate when you have comments, but are good enough for establishing order.

SteveDoyle2 commented 5 years ago

There also seems to be an issue with SPOINTs/EPOINTs as well as rejects. I'm not quite sure how to handle those.

SteveDoyle2 commented 5 years ago

This is done now. You do not have to write all the includes; only the ones for which you specify output files will be written. However, you do need to specify the filename for each output file, which is annoying.

An example is:

import os

model.read_bdf(bdf_filename, save_file_structure=True)
out_filenames = {}
for i, fname in enumerate(model.active_filenames):
    dirname = os.path.dirname(fname)
    basename = os.path.basename(fname)
    out_filenames[fname] = os.path.join(dirname, 'out_' + basename)
model.write_bdfs(out_filenames, relative_dirname='')
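
Since only the files you list are rewritten, a variant that rewrites a single include might look like this (the include name is illustrative):

import os

model.read_bdf(bdf_filename, save_file_structure=True)

out_filenames = {}
for fname in model.active_filenames:
    if os.path.basename(fname) == 'wing_bulk.inc':  # only rewrite this one include
        out_filenames[fname] = fname + '.out'
model.write_bdfs(out_filenames)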

The final API is:

def write_bdfs(self, out_filenames, relative_dirname=None, encoding=None,
               size=8, is_double=False,
               enddata=None, close=True, is_windows=None):
    """
    Writes the BDF.

    Parameters
    ----------
    out_filenames : dict[str, str]
        maps each input filename (e.g., from model.active_filenames)
        to the output filename it should be written to
    relative_dirname : str; default=None -> os.curdir
        A relative path to reference INCLUDEs.
        ''   : relative to the main bdf
        None : use the current directory
        path : absolute path
    encoding : str; default=None -> system specified encoding
        the unicode encoding
        latin1, and utf8 are generally good options
    size : int; {8, 16}
        the field size
    is_double : bool; default=False
        False : small field
        True : large field
    enddata : bool; default=None
        bool - enable/disable writing ENDDATA
        None - depends on input BDF
    close : bool; default=True
        should the output file be closed
    is_windows : bool; default=None
        True/False : Windows has a special format for writing INCLUDE
            files, so the format for a BDF that will run on Linux and
            Windows is different.
        None : Check the platform
    """