Hello, thank you very much for your constructive criticism, I really appreciate it.
I will try my best to answer, but this is definitely ongoing work that needs to be improved.
1) About the restriction to ndarray: restricting to elements of the same type (triangles, for instance) allows faster algorithms to use and post-process the solutions (through vectorized matrix formulations of the equations). This can definitely be improved to handle mixed element types. I am open to any proposition, but I would like to keep the idea of an ndarray containing elements of a single type, which is really handy for postprocessing. Maybe we could think about your idea of storing everything in an "oversized" ndarray (with element-type attributes etc.), with additional functions returning a Submesh containing only the elements of a given type. What do you think?
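To make point 1) concrete, here is a rough sketch of what such an "oversized" ndarray plus a type-filtering function could look like (none of these names exist in the code, it is only to illustrate the idea):

import numpy as np

# Oversized connectivity: one row per element, padded with -1 up to the
# largest node count, plus an array giving each element's type.
connectivity = np.array(
    [[0, 1, 2, -1],   # triangle (3 nodes + padding)
     [2, 3, 4, 5],    # quadrangle (4 nodes)
     [1, 2, 5, -1]]   # triangle
)
element_type = np.array(["Triangle", "Quadrangle", "Triangle"])

def get_submesh(connectivity, element_type, wanted_type, nb_node):
    """Return the connectivity of a single element type, without padding."""
    mask = element_type == wanted_type
    return connectivity[mask, :nb_node]

triangles = get_submesh(connectivity, element_type, "Triangle", 3)
# array([[0, 1, 2],
#        [1, 2, 5]])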
2) About the FEMMSolution class: I don't see any problem with doing that, but it is important to keep a link between the mesh and the solution. At some point, the software could handle several meshes and several solutions from different simulations. Maybe we should have a parent class "MeshSolution" with a "Mesh" attribute and a "Solution" attribute.
3) About LUA scripting: it is much, much faster. In some of our cases, it reduced the computation time for this operation from 5 minutes to 10 seconds. Actually, D. Meeker himself gave us this trick.
Best regards,
Raphaël
Hello Raphaël,
I thought again about the oversized ndarray and actually I think it may not be the best idea, since it consumes a lot more memory. In my opinion, a much more elegant way would be to use dicts:
mesh.nodes = np.array([ ..... ])
mesh.element = {"Triangle": np.array([ ... ]), "Quadrangle": np.array([ ... ])}
mesh.group = {"Triangle": np.array([ ... ]), "Quadrangle": np.array([ ... ])}
The resulting code would be easy to understand, easy to extend, and would only require a slight overhead for choosing the appropriate element type. (Oversized ndarrays would also require decoding element types by number and filtering.) Solution values could then be stored the same way.
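To sketch how postprocessing could then look (the area formulas are only an illustration; the point is the small per-type branch):

import numpy as np

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [2.0, 0.5]])
element = {
    "Triangle": np.array([[1, 4, 2]]),
    "Quadrangle": np.array([[0, 1, 2, 3]]),
}

# Vectorized computation per element type, e.g. cell areas
for elem_type, connectivity in element.items():
    pts = nodes[connectivity]              # shape (n_elem, n_nodes, 2)
    if elem_type == "Triangle":
        area = 0.5 * np.abs(np.cross(pts[:, 1] - pts[:, 0], pts[:, 2] - pts[:, 0]))
    elif elem_type == "Quadrangle":
        # half the cross product of the diagonals gives the quadrangle area
        area = 0.5 * np.abs(np.cross(pts[:, 2] - pts[:, 0], pts[:, 3] - pts[:, 1]))
    print(elem_type, area)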
What do you think about that?
Best regards, Sebastian
Hello Sebastian,
I think it is a good idea. However, it requires "tags" (or "element numbers") that are unique for any element across the whole mesh. I think it would be a good idea to have tags for nodes as well.
Additionally, it will be necessary to store the number of elements per element type:
mesh.nb_element = {"All": total_nb, "Triangle": nb_triangle, "Quadrangle": nb_quad}
And of course we need new methods to add or remove elements. In particular, I'd like to be able to define a submesh of "Segment" elements, each defined by 2 nodes.
If you want, I can make the modifications in the code and open a pull request.
Best Regards, Raphaël
Hello Raphaël,
I don't know if I will have the time to implement the modifications short term, so I'm fine with you doing that. I'd rather make some smaller contributions, e.g. plot methods.
Best Regards, Sebastian
Hello to everyone interested in this issue. I am putting a reference here to pull request #50, since there was a further discussion there with a solution on how to design the pyleecan default Mesh and Element classes.
Best Regards, Sebastian
Hello Raphaël,
I've got some other things in mind to be discussed.
Best regards, Sebastian
Hello Sebastian,
1) My idea is to use object references and to make everything point to the same Mesh when nothing has moved (for example when using the FEMM sliding-band option). Then meshsolution[time_step] stores the same mesh without any additional memory cost, while the solution is updated at each time step.
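Roughly like this (a standalone illustration; the class is just a placeholder):

import numpy as np

class TimeStep:
    """Placeholder: one time step = a reference to the mesh + its solution."""
    def __init__(self, mesh, solution):
        self.mesh = mesh          # shared reference, not a copy
        self.solution = solution  # per-time-step field values

nodes = np.random.rand(100, 2)    # stands for the (fixed) mesh
meshsolution = [TimeStep(nodes, np.random.rand(100)) for _ in range(10)]

# Every time step points to the very same mesh object in memory
assert all(step.mesh is meshsolution[0].mesh for step in meshsolution)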
2) and 3) I have thought about this as well. At some point we will have to take everything into account: nodal, edge, face and volume solutions. Your proposition is compatible with that, so I agree we can make it work this way. However, I think it is not very efficient to redefine each node as a "Nodal" element in order to define nodal fields or nodal integrated quantities. Instead, we could have two new classes as attributes of Mesh: SolutionNodal and SolutionElement.
Anyway, one could implement another class inherited from Solution() with tags later. It just means that the Mesh methods get_nodal_solution and get_element_solution would have to be adapted.
Looking forward to your feedback!
Raphaël
Hello Raphaël,
Another thing I had no idea about yet: higher-order FEA simulations, where there is no such thing as a single elemental value, since the quantities vary with position.
By the way, will you implement the changes from the "ElementDicts" discussion, or should I do that?
Best regards, Sebastian
Hello Sebastian,
About the .json saving, you are completely right, I forgot about it. The straightforward idea is to have a boolean is_same_mesh as an attribute of MeshSolution, and to store lists of Mesh and Solution instead of a list of MeshSolution. We can then have an adequate get_mesh(time_step) method: if is_same_mesh is True, get_mesh always returns MeshSolution.mesh[0].
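Something like this sketch (not the final implementation, just the idea):

class MeshSolution:
    """Sketch: lists of Mesh and Solution plus an is_same_mesh flag."""

    def __init__(self, mesh, solution, is_same_mesh=True):
        self.mesh = mesh              # list of Mesh (length 1 if is_same_mesh)
        self.solution = solution      # list of Solution, one per time step
        self.is_same_mesh = is_same_mesh

    def get_mesh(self, time_step=0):
        # With a unique mesh, always return the first (and only) one
        if self.is_same_mesh:
            return self.mesh[0]
        return self.mesh[time_step]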
About higher-order finite elements: they will require elements with additional inner nodes (6-node triangles for example), which is totally compatible with the current architecture :-)
We could define a convention following this example:
get_all_connectivity("Triangle6")
return six nodes triangles
get_all_connectivity("Triangle3")
return 3 nodes triangles
To project the field onto any point in space, we will need to define shape functions. This can easily be implemented by creating a "Projection" class, as an attribute of "Element", containing all the necessary parameters. We will see that in a later issue if necessary, but for me it is not a problem at all.
I will make the modifications immediately because I need them for my PhD work.
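For 3-node triangles, for instance, the interpolation itself boils down to a few lines (standalone numpy sketch, independent of any future "Projection" class):

import numpy as np

def interp_triangle3(vertices, values, point):
    """Interpolate a nodal field at 'point' inside a 3-node triangle.
    vertices: (3, 2) node coordinates, values: (3,) nodal values."""
    # Map the point to the reference coordinates (s, t)
    A = np.column_stack((vertices[1] - vertices[0], vertices[2] - vertices[0]))
    s, t = np.linalg.solve(A, point - vertices[0])
    # Linear shape functions of the reference triangle
    N = np.array([1 - s - t, s, t])
    return N @ values

# Example with a field varying linearly as f(x, y) = x + 2*y
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(interp_triangle3(tri, np.array([0.0, 1.0, 2.0]), np.array([0.3, 0.3])))  # 0.9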
Best regards,
Raphaël
Hello Raphaël,
I finally started working on the iron loss calculation, and I saw that you are currently working on the Mesh and Solution classes. Is there anything I have to take care of regarding your planned modifications? I guess the methods will stay the same, so I can use them as a valid interface.
Best regards, Sebastian
Hello Sebastian
Nice to hear from you! :-) I have been using Pyleecan for my thesis work lately, and because it is not perfectly suited to multiphysics simulations right now, we are making a lot of modifications on this topic. So it is not going to stay exactly the same.
I will certainly write a tutorial about the MeshSolution-related work.
1) About the Mesh object: we have found a nice package to visualize and perform operations on meshes (https://github.com/pyvista/pyvista), so we have a new Mesh class called MeshVTK. The idea is to create a .vtk mesh file with meshio, then read and plot it with pyvista. It is going to be much more efficient. For the moment I want to keep a mesh class similar to the previous one, so the Mesh class is going to become abstract, and the previous format goes into MeshMat. The main difference is the use of the notation "Point" instead of "Node" and "Cell" instead of "Element".
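The workflow is roughly as follows (minimal example outside of pyleecan, just to show the meshio/pyvista combination):

import numpy as np
import meshio
import pyvista as pv

# Write a tiny triangle mesh to a .vtk file with meshio ...
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cells = [("triangle", np.array([[0, 1, 2]]))]
meshio.write("mesh.vtk", meshio.Mesh(points, cells))

# ... then read and plot it with pyvista
grid = pv.read("mesh.vtk")
grid.plot(show_edges=True)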
2) About the MeshSolution class: the MeshSolution class stores a list of Solution objects; it has nothing to do with time steps anymore. For example, out of FEMM there will be 3 items: B, H, mu. The time dependency must be managed inside the Solution object. This was required to enable FFTs easily (through the SciDataTool library for example). There are going to be 2 inherited classes: SolutionData (with SciDataTool) and SolutionMat (field stored as a matrix). These two solutions require the same mesh at every time step (so the sliding band must be activated). If someone needs to save a solution without the sliding band, they will have to create a dedicated Solution (for the moment we don't really see any other use case, which is why we give priority to solutions using a unique mesh at every time step).
3) FEA interpolation: I still need to provide basic FEA interpolation, hopefully this week or the next. And finally, I have some features of the MeshMat object that I still need to push.
If something is not clear, don't hesitate to ask questions. As I said, a clean tutorial is definitely required to explain the intricacies.
Raphaël
Hello Sebastian,
I have almost finished the MeshSolution part (points 1) and 2)). It is available here: https://github.com/EOMYS-Public/pyleecan.git
We still need to work on the "interface" and "set_submesh" methods (the goal is to rely on the powerful features of pyvista), and not all unit tests are updated yet. You can try it with test_CEFC_002_save_mag and test_EM_SPMSM_FL_002.
Best Regards
Raphaël
Hello Raphaël,
I've got a question :-) What is the most efficient way to get a subset of a solution, i.e. to filter the solution by required indices or groups? There are a lot of methods related to MeshSolution and maybe one of them can already do that, but I have not fully understood them yet. If there is no such method, it could make sense to introduce a filter method.
Best regards, Sebastian
Hello Sebastian,
I think it would be much easier to handle groups from the MeshSolution object (you can filter the Mesh and the corresponding Solution at the same time without navigating through parent objects). The groups are defined by the "group" attribute of MeshSolution (which is a dict). We would like to define the group keys as follows:
Now comes the issue of accessing the groups:
Idea 1): Use the get_mesh and get_solution methods with a "group" argument. The expected behaviour is to look into the group attribute (a dict of lists containing cell indices) to find the corresponding key, and return a new Mesh or Solution object.
Idea 2): Write a get_group method in MeshSolution that returns a new Mesh AND the corresponding Solution.
Idea 3): Let the user/developer find the solution for each case:
mesh = meshsolution.get_mesh(index=0)
cell_indices = meshsolution.group["my_group"]
connectivity = mesh.get_cell(cell_indices)
Then, the get_field method of the MeshSolution object must be called according to the type (cell or point) of the field. For example, with a nodal solution:
point_indices = numpy.unique(connectivity)
meshsolution.get_field(label="my_point_field", indices=point_indices )
As you can see, the problem with the first two ideas is that we would have to "guess" the type of the field.
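For idea 2), a rough sketch could be (the caller still has to say which labels are cell-based and which are point-based, which is exactly the "guessing" problem):

import numpy as np

def get_group(meshsolution, key, cell_labels, point_labels):
    """Sketch of idea 2): filter the mesh and the corresponding fields of one group."""
    cell_indices = meshsolution.group[key]
    mesh = meshsolution.get_mesh(index=0)
    connectivity = mesh.get_cell(cell_indices)
    point_indices = np.unique(connectivity)

    fields = {}
    for label in cell_labels:
        fields[label] = meshsolution.get_field(label=label, indices=cell_indices)
    for label in point_labels:
        fields[label] = meshsolution.get_field(label=label, indices=point_indices)
    return connectivity, fields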
What do you think ?
Raphaël
Hello Raphaël,
I'm currently working on the loss calculation and therefore have to deal with the MeshSolution module. As far as I understand the code of the get_cell method of the MeshMat class, it may split the returned dict depending on the cell type. So, e.g., the indices of cells['triangle'] and the input indices may not match anymore if there are other cell types. This is not an issue at all, since there is the indice_dict return value. I do see an issue if someone accesses a certain cell-type array and expects it to have the same indices that were given as input. Furthermore, the get_vertices method doesn't pass the indices_dict yet.
The background of this 'question' is that I added some code to the get_cell_area method. But if I want to further use this area, e.g. to multiply it with the loss density, I have to adapt the area if there are multiple cell types within MeshSolution. The easiest solution I see is to adapt get_vertices to pass indices_dict and then rebuild the input indices order in get_cell_area.
But maybe you've got an other idea.
BG Sebastian
Hello Sebastian,
Great news! Unfortunately, I am not sure I perfectly understand what you mean by
I have to adapt the area if there are multiple cell types within MeshSolution.
But I don't see any problem with modifying get_vertice to pass indice_dict instead of the indice(s). I can't think of any algorithm that would need the vertices without knowing the type of cell (and thus the dimension of the output).
If you want, I can make the modifications in the "Solution" branch of EOMYS-Public. I need to update a few methods of the MeshSolution module anyway (nothing huge, mainly CO and small BC).
Raphaël
Hello Raphaël,
I was not sure whether the indices of a field solution would still match the indices of the values returned by get_vertices if there are multiple cell types. But maybe this is irrelevant anyway.
Another quick question: is the shape of a meshsolution field (i.e. meshsolution.solution.get_field().shape) restricted somehow, maybe regarding the number and order of axes? I was trying to set up a one-axis field and got some errors due to indexing.
BR Sebastian
Hello Sebastian,
I am going to release an Interpolation object for the MeshMat module. Maybe it will be clearer for you if you can think in terms of objects for the calculation of cell areas.
get_field works with a dict as input. Let's say you have stored B with shape (1200, 10, 2) (1200 cells, 10 time steps, and 2 components x/y).
Then
get_field( {"indice":[0,1,2,3]} )
should return a ndarray of shape (4, 10, 2).
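In pure numpy terms, the expected shape behaviour is simply (just to illustrate the shapes, not the actual implementation):

import numpy as np

B = np.zeros((1200, 10, 2))           # 1200 cells, 10 time steps, 2 components
print(B[[0, 1, 2, 3], :, :].shape)    # (4, 10, 2)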
Is that clearer? (I know, I should add more information in the documentation, sorry.)
Hello Raphaël,
yes, some documentation would be good, since I may be accessing data 'the wrong way'. :-) We should discuss that after my PR, which I'm preparing right now. For that I'm doing some code cleaning and had an issue with the field's axes. I think this is related to the 'squeezed' field of the get_field method (i.e. I only have one time step). So is it necessary to squeeze the field?
BR Sebastian
Great 👍 Waiting for your PR :-)
The idea is that get_field() (with no argument) returns the same shape as described by get_axis(). In the example I gave you before, get_axis will return a dict such that:
{"indice": 1200, "time": 10, "component": 2}
Even if there is only one time step, there shouldn't be any squeeze, because I want to be sure that the dimensions from get_axis and the shape of the field from get_field match. But you can of course squeeze internally in your algorithm.
By the way, the data objects from SciDataTool include some FFT features (I don't know exactly what you need for the loss calculation).
But there is a squeeze at line 59 of the get_field method, at least in my current pyleecan version:
field = squeeze(solution.get_field(args=args))
My field has two axes, while time has only one value:
along_arg = []
for axis in sol.field.axes:
    along_arg.append(axis.name)
>>> ['time', 'indice']
sol.get_field().shape
>>> (9890,)
So the field is squeezed.
I will keep my workaround for now, so you can test it yourself.
BR Sebastian
I have updated the Losses branch on EOMYS-Public (no conflict with your work).
https://github.com/EOMYS-Public/pyleecan/commit/17ea8defe1efc9b9f2487287cfa3582c86cb785d
It should work now ;-)
Hello Raphaël,
I pulled your latest commits from the 'Solution' branch and get the following error message. Is it a bug, or am I using the get_group method the wrong way?
Best regards, Sebastian
Traceback (most recent call last):
File "...\dev.Losses\pyleecan\_test_skript_comp_loss.py", line 55, in <module>
grp =mshsol.get_group('stator')
File "...\dev.Losses\pyleecan\pyleecan\Methods\Mesh\MeshSolution\get_group.py", line 98, in get_group
field_sol = sol.get_field()
File "...\dev.Losses\pyleecan\pyleecan\Methods\Mesh\SolutionData\get_field.py", line 38, in get_field
if all_ax[i] == 1:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
(BTW: mshsol is of class 'MeshSolution', i.e. the solution of a FEMM simulation; I shortened the 'File' paths in the error message)
... so far it seems it's an issue with SolutionData.get_field
Hello Sebastian ! Do you have a test (for instance _test_skript_comp_loss) that I can run to check the bug in debug mode ?
EDIT: I managed to reproduce the error; there is a bug in get_axis. I am going to correct this ASAP, thank you for pointing it out.
Corrected with 15a5e688733d917384377dc62cfff09dac5a1992
Thank you. This fixed the issue.
Next question, or rather remark :-) It's not possible to get the order of the axes of a field, since get_axis returns a dict that isn't ordered (at least for Python < 3.7). Also, I thought I saw a method that returned the axis values, but I can't find it now. Is there any?
Hello Sebastian,
1) I know, I already had some issues with this. I am going to replace the dict returned by get_axis with a list such as:
axis = [["time", 10], ["indice", 1299], ["component", 2]]
which must be ordered like the field returned by default from get_field().
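With such a list, initializing an ndarray of matching size stays a one-liner (sketch):

import numpy as np

axis = [["time", 10], ["indice", 1299], ["component", 2]]
shape = tuple(size for _, size in axis)   # (10, 1299, 2)
field = np.zeros(shape)                   # matches get_field() by construction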
2) What do you mean by the axis values? Do you want to get the time vector, for instance? I don't think there was any getter, but it would be a good idea to have one.
Hello Raphaël,
as you already guessed, I meant, for instance, the time vector for the 'time' axis or the indices vector for the 'indice' axis. (Maybe I have seen it in SciDataTool's data.)
Best regards, Sebastian
As a temporary solution, you can use the following code to access the axis:
solutionvec.field.components["x"].axes[0]
solutiondat.field.axes[0]
What do you think about returning SciDataTool's axes instead of a list? Essentially it would also be a list, but one could 'ask' for name, values, symbol, unit, ...
It could be a good idea, but the initial get_axis method is very handy for initializing ndarray sizes. I could write a get_axis_list corresponding to the current method, and replace get_axis with your proposal. What do you think?
Yes, that would be fine. Only, if get_axis_list is for initializing ndarray sizes, it could be named get_axis_size, with just the sizes as output?!
Me again... SolutionMat seems to be a little limited then. So I have to extract all the axes values (except indices) before I use the get_group method, which converts fields to SolutionMat?! Edit: not only limited, but incompatible with the other SolutionX classes.
(Don't worry, it is great to have real feedback ;-) )
I did not have any problem before because I did not perform any operation on time. We could improve get_group to return a chosen Solution type. Maybe with a "convert" method added to all Solution classes.
Edit: What do you mean by "incompatible" ?
Incompatible because the 'new' get_axis method won't work.
Then, let's add an axis attribute to SolutionMat. I was already thinking about doing it anyway.
... or at least change the existing one :-)
Actually, I am trying to improve my loss calculation proposal. I would like to filter the meshsolution by groups first and then process the data. But that won't work, since then I can't utilize the SciDataTool transform methods. Also, I would not be able to use the 'getter' methods and would have to access the data directly.
Do you have any idea whether it will affect computation time a lot if I transform all the data first and filter by groups afterwards (vs. filtering first and then transforming)?
A solution could be to set up my 'own' SciDataTool data after getting the axes and fields?!
What do you think?
For the moment, the 2D calculations are quite light, so I don't think it will be very costly to filter afterwards. Nevertheless, setting up your own data object is a possibility. Actually, we already do that most of the time in the code: see the comp_force method or build_meshsolution for examples.
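From memory, building your own data object looks roughly like this; please double-check the exact constructor arguments against the SciDataTool documentation:

import numpy as np
from SciDataTool import Data1D, DataTime

time = np.linspace(0, 1 / 50, 10, endpoint=False)
indices = np.arange(50)
loss_density = np.random.rand(10, 50)   # stands for a per-cell loss density

Time = Data1D(name="time", unit="s", values=time)
Indices = Data1D(name="indice", unit="", values=indices)
Loss = DataTime(
    name="Loss density",
    unit="W/m^3",
    symbol="L",
    axes=[Time, Indices],
    values=loss_density,
)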
Using SciDataTool objects will allow you to use the different plots btw.
Hello Raphaël,
I've got a plot_contour issue. On my desktop the plot looks okay, but on my laptop it is scattered (see below). Which data (of plot_contour) do I have to compare to determine whether it's a pyvista or a pyleecan issue? The data is a MeshSolution with a solution of class SolutionData.
BTW: I think we should have file I/O that is more fault tolerant. I sometimes get an error regarding temp.vtk, e.g. because some of my files are in a Dropbox-synced folder, which is maybe a little slower than normal folders.
Hello Raphaël,
some of the MeshSolution methods have e.g. 'label' as an input. If the given label doesn't exist, there is some fallback and the respective methods will still return some data. If this behavior isn't absolutely necessary, we should change it to avoid potential issues.
Best regards, Sebastian
Hello Raphaël,
I tried to extract some point values from a meshsolution. For that I wanted to use the find_cell method of CellMat, but the is_inside method of Ref_Cell is missing, so I get an error. Do you already have this method?
Best regards, Sebastian
Hello Sebastian, I don't have it for Triangle3 cells yet, but it should be quick to write. I can do it at the beginning of next week.
If you could tell me what 'normal_t', 'a' and 'b' are, I will try to write the method myself, since I need it.
(is_inside, a, b) = cells.interpolation.ref_cell.is_inside(vert, pt, normal_t)
I guess 'vert' are the vertices of the actual cell and 'pt' is the point to test if it's inside.
The idea is to get the equivalent point in the reference cell with get_ref_point, and then to check whether this equivalent point is within the limits of the reference cell.
point_ref = self.get_ref_point(vertice, point)
s = point_ref[0]
t = point_ref[1]
# Barycentric coordinates in the reference triangle
a = s
b = t
c = 1 - s - t
is_inside = (a > -epsilon) & (b > -epsilon) & (c > -epsilon)
# Optional: check that the normals are almost aligned (2D-on-3D case)
if normal_t is not None:
    normal_s = self.get_normal(vertice)
    scal_st = np.dot(normal_t[0:2], normal_s)
    is_colinear = abs(scal_st) > 1 - 2 * epsilon
    is_inside = is_inside & is_colinear
return is_inside
This code should more or less do it ;-)
The normal is intended for 2D-on-3D localization. I think it is completely useless for 2D-only computations.
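For the 2D-only case, the whole test boils down to this standalone check (pure numpy, independent of the pyleecan classes):

import numpy as np

def is_inside_triangle(vertices, point, epsilon=1e-12):
    """Barycentric inside-test for a 3-node triangle in 2D."""
    A = np.column_stack((vertices[1] - vertices[0], vertices[2] - vertices[0]))
    s, t = np.linalg.solve(A, point - vertices[0])
    return (s > -epsilon) and (t > -epsilon) and (1 - s - t > -epsilon)

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(is_inside_triangle(tri, np.array([0.25, 0.25])))   # True
print(is_inside_triangle(tri, np.array([0.8, 0.8])))     # False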
Thank you!
Hello Raphaël,
I found a bug with the find_cell method. Should I PR my fix to your Solution branch or to master? Also, I was not able to use the indices input argument of MeshSolution.plot_contour(), since it gave an error (this may also apply to get_field). I guess you are doing the next webinar, at least the mesh part; is there already some preview of the corresponding notebook? Are you actively working on MeshSolution right now?
Furthermore, should we close this issue, since it's already pretty long and somewhat unspecific, and instead open new issues as needed?
Best regards, Sebastian
Hello Sebastian,
You can open as many PRs as you want against the "Losses" branch of EOMYS-Public, until we completely validate the Losses module.
I have not modified the MeshSolution module recently. There are probably some undetected bugs; I hope to come back to this full time in November! I am working on the tutorial today ;-)
I agree to divide this issue into sub-issues. Let's make a list:
Do you see something I have missed ?
BR
Hello all,
I want to start a discussion about a general FEA mesh & solution class here. A first realization of this feature (more or less tied to FEMM specifics) was done with pull request #47. With the current MeshMat class (which I think is meant to be a general class) it is only possible to store elements of one kind (e.g. triangles). I think this is very restrictive. Is there a reason not to expand the elements attribute ndarray to the maximum number of nodes of the used or supported element types, or to use a list to store the elements' nodes? In addition, this would require an element type attribute, or storing the element type as the first value of each row within the elements attribute.
In the current implementation the FEMM solution is stored in a MeshFEMM object which is inherited from MeshMat. What do you think about separating the mesh and the solution to some degree, i.e. having a solution attribute in MeshMat? This attribute could then be of type FEASolution (or more generally MeshSolution) with an inherited FEMMSolution holding nodal and elemental solution values.
Lastly, why are you interfacing FEMM with a LUA script instead of the Python interface? Is it faster or more convenient for that purpose?
I'm looking forward to a constructive discussion. Best regards, Sebastian