Hmmm, yeah @jviquerat, I am puzzled that I can't find anything; the force calculation the way it is now should do the same as mine. Furthermore, the circle is symmetric, so why would there be a difference in impulse in the vertical direction? Well, it is not perfectly symmetric, but nonetheless. I could never observe something like this in my simulations.
@2b-t The circle discretization is not exactly symmetric due to staircasing, and it reveals the effects of whatever mysterious error I have made somewhere in my implementation... After a careful check, in this particular case the cylinder discretization is symmetric in x but not in y.
For reference, the correct values for the cylinder should be Cd = 5.58, Cl = 0.0107 (from Turek)
I will take a break from this for today and come back to it tomorrow ^^ Anyway, many thanks again for the time you spent checking my mistakes!
Fine, I will still take some time to look for it. I am really puzzled that I can't find it. I will keep you up to date if I find something here.
@2b-t I think I got it! When I create the self.boundary array, by construction I add multiple occurrences of the same lattice positions, and I don't unique-sort them afterwards. So in the drag-lift part, some contributions appear multiple times.
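The fix should then be a one-liner (a minimal sketch, assuming self.boundary is an (N, 2) integer array of lattice coordinates):

```python
import numpy as np

# Hypothetical boundary array with repeated lattice positions
boundary = np.array([[3, 4], [3, 5], [3, 4], [4, 5]])

# Keep every lattice position exactly once, so that each momentum-exchange
# contribution is summed a single time in the drag-lift computation
boundary = np.unique(boundary, axis=0)
print(boundary)  # [[3 4], [3 5], [4 5]]
```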
@jviquerat That would make sense! I was just counting the cells a couple of minutes ago and I estimated them at around 40 per quarter circumference, which would be something like 160 cells; 190 boundary cells for an area of 300 inside seemed a bit much. The force calculation otherwise looks fine. Let me know if that is it and commit the updated version; then we can have a look at the outlet boundary condition and check why it crashes with BGK.
@2b-t I have a question regarding your thesis, p. 50. When you multiply the computed momentum exchange by dx**2/dt, is it the "lbm" dx and dt (i.e. 1), or the physical ones?
\Delta x and \Delta t in my thesis are always in lattice units. As you have probably seen already, some people do not use them in their derivations (they set them to unity) while others do. I tried to include them consistently, as you can use them to check whether the units are correct. Everything is unitless; only \Delta x and \Delta t give it units. I think of LBM as a fictional system with blocks of 1 m length, time steps of 1 s and a density of 1 kg/m^3, so dimensioned and not in dimensionless LBM units. I do that to emphasise that if you use LBM consistently that way, the dimensionless numbers will be correct. Do not mix LBM and physical units: if you need real forces, use the factor Cf; it basically scales the fictional LBM system, with values for length, time and density equal to unity, to a system of more reasonable SI units by the laws of similarity. As you can see, my conversion factors therefore also have no units. I think doing so consistently is also very useful if you use block-structured meshes (I have never done so, but nonetheless): some authors simply apply a different time step for the larger cells, and thus everything scales differently on the grids depending on which time and length step they use.
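As a small illustration of the scaling idea (the numbers are made up):

```python
# Made-up numbers, just to illustrate the scaling idea: choose physical
# reference scales for length, time and density, and every other
# conversion factor follows by dimensional analysis.
dx  = 0.01    # physical size of one lattice cell [m]
dt  = 1.0e-4  # physical duration of one time step [s]
rho = 1000.0  # physical reference density [kg/m^3]

C_u  = dx / dt               # velocity factor [m/s]
C_nu = dx**2 / dt            # kinematic viscosity factor [m^2/s]
C_F  = rho * dx**4 / dt**2   # force factor in 3D [kg m/s^2]

u_phys = 0.05 * C_u          # a lattice velocity of 0.05 in physical units
```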
Okay for the conversion, it was just to be certain ^^ I am running new tests on the cylinder now.
Hey @2b-t!
Things are getting close to good, although convergence is not perfect. The values I get for drag are pretty correct (drag should be 5.58, lift 0.0107), although when I refine, the convergence is not very clear, especially for lift. Have you had similar experiences? I would say this could be due to the integration on the staircase approximation. (The figures in the legend correspond to the number of cells in the y direction.)
@jviquerat I think so as well. Actually, I never really did mesh-dependence studies with forces; forces were just an extra I was interested in and another option to check whether the code works fine. Neglecting the unphysical phase in the beginning, I think the drag seems to converge (right side of the graph). The lift, in my opinion, should be taken with care when the values are as low as in your case. The staircase discretisation might result in errors, as the form is generally not completely symmetric and introduces some sort of roughness. Furthermore, lift in particular is influenced a lot by the nearby walls (ground effect, if the shape is not perfectly centered and closer to one wall than the other). I am not sure what form precisely you simulated over there; it should be a slightly asymmetric shape, but I would try to check what the influence is if the wall is further away, or use a set-up identical to the benchmark. You will find a significant range of values in the benchmark data for most problems as well, depending on the reference method, time step and precise geometry.
Furthermore, there is a second discretisation error: the time step, which is basically set with the characteristic velocity. I think you kept it the same in this context, but to complicate things even further, in addition to the discretisation errors in space and time there is also a compressibility error due to the pseudo-incompressibility of LBM. As you have seen, LBM is able to preserve shocks. This is the case because the equations are actually never completely incompressible; but if you let the characteristic velocity go to zero and guarantee that the over-speeds across the domain are small, the solution approaches the incompressible one. This compressibility error couples the spatial and temporal discretisation errors. Thus, for consistent second-order convergence you should adopt diffusive scaling, where you adjust the characteristic velocity in such a way that the compressibility error is also consistently reduced (appendix E5, page 151 of my Master's thesis). Not respecting this coupling might be a cause for any weird convergence behaviour.
The case I simulate is from a Turek paper and is a reference case to validate CFD codes (there are many "Turek" benchmarks; mine is the simplest): http://www.featflow.de/en/benchmarks/cfdbenchmarking/flow/dfg_benchmark1_re20.html It is indeed asymmetric, as the cylinder is slightly off from the channel center.
Yes, I followed an idea I saw who-knows-where, which is to systematically impose u_lb = 0.03, choose Re_lb and L_lb, and deduce tau_lb and nu_lb from them. If I got things right, doing it this way, either I am accurate (regarding the compressibility error) or I am unstable; and if I am unstable, I increase L_lb. Is that what you meant?
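Concretely, what I do looks roughly like this (a minimal sketch; the names and values are illustrative):

```python
# Fix the lattice velocity, pick the Reynolds number and the obstacle
# resolution, then deduce the lattice viscosity and BGK relaxation time.
u_lb = 0.03                 # imposed characteristic lattice velocity
re   = 20.0                 # target Reynolds number
l_lb = 50                   # obstacle size in lattice cells

nu_lb  = u_lb * l_lb / re   # lattice kinematic viscosity
tau_lb = 3.0 * nu_lb + 0.5  # BGK relaxation time, using c_s^2 = 1/3

# If tau_lb ends up too close to 0.5 the simulation tends to blow up;
# increasing l_lb raises nu_lb and pushes tau_lb back up.
```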
I will have a look at the integration method on curved boundaries described in Krüger's book. As a matter of fact, in the applications I am considering I always have a very accurate description of the obstacle boundary, and from what I understood so far, that makes it easier to compute forces on curved boundaries. I will let you know ;)
Hmmm, ok, I am not familiar with that benchmark. I am no fan of benchmarks that put the cylinder so close to the inlet: there is a stagnation bubble that forms in front of the cylinder, and it is pretty big; if you put the inlet that close, it does not form correctly. It is hard to estimate what the consequences are, but I personally prefer data with the cylinder placed further away and the walls further apart. I will have a look at it later today or tomorrow.
I am not sure if I understood the way you set the parameters correctly, but what I described is explained in "2.2.7 Errors and accuracy" (pages 37-38) and "Diffusive scaling" (appendix E5, pages 150-151) of my thesis. There are three error sources: the scheme is second-order accurate in space and time, but has a third error, the compressibility error that scales as $\Delta t^2 / \Delta x^2$, as the method is not truly incompressible. As a result, for consistent second-order convergence you have to respect $\Delta t \propto \Delta x^2$ (see 2.2.7 to understand why). This actually means that if you want consistent convergence and double the resolution, it is not enough to halve the time step (which is what you get when you keep the characteristic velocity fixed); you have to reduce it by a factor of 4, which amounts to additionally halving the characteristic velocity! So if you do the first simulation with N = 50 and U = 0.03, for consistent convergence the refined grid should use N = 50·2 = 100 and U = 0.03/2 = 0.015. This leads to consistent second-order convergence, but increases the computational burden of every 2D refinement step by a factor of 2^2·2^2 = 16 (four times the cells, four times the time steps)! This technique is called diffusive scaling. I think at least for testing convergence you should use it.
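As a small sketch of the refinement sequence this implies (illustrative values):

```python
# Diffusive-scaling sketch: every time the grid spacing halves, the lattice
# velocity U ~ dt/dx must halve too, so the physical time step drops by a
# factor of four and the O(U^2) compressibility error shrinks consistently
# with the O(dx^2) spatial error.
n0, u0 = 50, 0.03
for level in range(4):
    n = n0 * 2**level   # resolution doubles each refinement level
    u = u0 / 2**level   # U scales like dx under diffusive scaling
    print(f"N = {n:4d}, U = {u:.5f}")
```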
Do so and let me know; curved boundaries are entirely new for me. I have never used them, as in my applications (porous media) this was never really feasible, and I expected it to more than halve my performance! I only had 6- and 12-core computers and no clusters at hand, so I tried to max out the performance of the computer. The turbulent computation with the tracer I sent you was actually 90 h of simulation time, even though I only know of a single implementation that is faster than mine: it requires 30 million cells for an accurate resolution, tiny characteristic velocities due to over-speeds of 20 in the porous media and, as the flow is turbulent, two populations, and finally it is 3D. With a potential performance drop of two, the simulation times would have been too long for this setup. :(
@2b-t Oh, ok, I was not really aware of that problem. I will soon be running out of time to work on this code, so I will focus on getting the right drag and lift values, which are the final goals for me here ;) Indeed, here the lift to compute is very small, so no wonder it is hard to get. I am not enough into CFD to have any point of view on the stagnation bubble, but I trust you on this ^^ I just know that we regularly compare to these benchmarks for our FEM computations. I gave Numba a try this morning, without getting exceptional results (on top of that, using @jitclass is pretty cumbersome to get to work compared to basic @jit and @njit). Do you have experience with Numba or similar libraries? My other alternative is to simply parallelize with MPI, which is closer to what I usually do for a living :)
@jviquerat I can totally imagine. It took us a while to get this working correctly, and your first post on Physics Stack Exchange is already quite old, so I imagine you have been working on it for a while. I went through the code again. The boundary conditions look almost fine now; I transformed my lattice onto yours and checked the right pressure boundary. There is still a tiny mistake in the formulas you added recently: there is no need to multiply by $c_s$; the $c$ in my formula is $c = \Delta x / \Delta t = 1$ and not the lattice speed of sound $c_s$! (You have not defined c in the routine zou_he_right_wall_velocity, btw.) I have always used bounce-back on the corners as well, so I can't guarantee that the implementation is correct, but from what I can tell from Krüger, that is the way I would implement it as well.
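To make the point about c explicit, here is a minimal sketch of how I would write such a right-wall velocity boundary with c = dx/dt = 1; the D2Q9 direction numbering below is an assumption of this sketch, not necessarily the one used in your repository:

```python
# Zou/He velocity boundary on the right wall for D2Q9, written with
# c = dx/dt = 1 (NOT c_s). Assumed direction numbering: 0:(0,0), 1:(1,0),
# 2:(0,1), 3:(-1,0), 4:(0,-1), 5:(1,1), 6:(-1,1), 7:(-1,-1), 8:(1,-1).
def zou_he_right_wall_velocity(f, u_x):
    """f: populations of shape (9, nx, ny); u_x: prescribed x-velocity."""
    w = f[:, -1, :]  # view on the right-wall column after streaming
    rho = (w[0] + w[2] + w[4] + 2.0 * (w[1] + w[5] + w[8])) / (1.0 + u_x)
    # The unknowns point into the domain (c_x < 0): f3, f6, f7
    w[3] = w[1] - (2.0 / 3.0) * rho * u_x
    w[7] = w[5] + 0.5 * (w[2] - w[4]) - (1.0 / 6.0) * rho * u_x
    w[6] = w[8] - 0.5 * (w[2] - w[4]) - (1.0 / 6.0) * rho * u_x
    return f
```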
As far as Numba goes, I have used it a couple of times, as I come from C++ as well and I prefer a C-like loop structure over some mysterious Numpy routines. Most of the time, adding "@jit(nopython = True, parallel = True, cache = True)" above every routine that I needed to parallelise was sufficient. You have to recheck your results after doing so, though: it might happen that the program ignores some implicit race condition or messes up the data type of some variable and ends up with wrong results.
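For illustration, the pattern looks like this (the kernel body is a stand-in, not one of your routines):

```python
from numba import jit, prange

# Illustrative decoration pattern; parallel=True requires prange on the
# loop that should actually be parallelised.
@jit(nopython=True, parallel=True, cache=True)
def relax(f, feq, om):
    q, nx, ny = f.shape
    for x in prange(nx):          # parallelised outer loop
        for y in range(ny):
            for i in range(q):
                f[i, x, y] = (1.0 - om) * f[i, x, y] + om * feq[i, x, y]
```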
It all looks not too bad, I mean. I ran a few simulations and it seems fine. I would finally use the jet colour scheme, as in the benchmark simulation paper, and compare the flow field visually. From the coding side I am pretty confident that everything runs as it should. If the forces are fine and the flow field (dead water behind the cylinder and the stagnation bubble in front of it) looks pretty similar, I'd assume that everything works correctly. If not, I would first check the definitions of mean velocity, flow profile and Reynolds number.
One thing regarding performance: performance might be bad because of your indexing. A (D, X, Y) layout in Numpy should result in bad cache coherence, as the values, similar to C++, are stored in row-major fashion. In order to support the internal architecture of the processor (cache pre-fetching), it is significantly faster to use a (Y, X, D) layout. This would require rearranging the indices accordingly throughout the code! Krüger has benchmarked different layouts in section 13.3.2, in particular see the grey box on pages 572-573: your performance might increase by a factor of 10+.
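To see why, compare the innermost loop of a cell-wise (jitted) kernel in both layouts (a toy sketch; the routine names are illustrative):

```python
from numba import njit

# In (Y, X, D) the nine populations of a cell are adjacent in memory,
# so the innermost loop walks with unit stride.
@njit(cache=True)
def density_dxy(f, rho):            # f laid out as (D, X, Y)
    q, nx, ny = f.shape
    for x in range(nx):
        for y in range(ny):
            s = 0.0
            for i in range(q):
                s += f[i, x, y]     # jumps nx*ny elements per step
            rho[x, y] = s

@njit(cache=True)
def density_yxd(f, rho):            # f laid out as (Y, X, D)
    ny, nx, q = f.shape
    for y in range(ny):
        for x in range(nx):
            s = 0.0
            for i in range(q):
                s += f[y, x, i]     # unit stride, cache friendly
            rho[y, x] = s
```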
Otherwise I have no fast fix apart from Numba for the performance in Python. I know how to write optimised C++ (assembly, AVX2/AVX512 vector intrinsics, OpenMP, MPI, CUDA), so I only use Python for writing prototype codes and use C++ for performance later if necessary.
Indeed, I did not pay too much attention to that when writing it. I will spend some time rearranging the indices and see how much I can gain :)
Good luck, let me know then what the difference is! :)
Hey @2b-t! Some news:
I am wondering: what is actually your goal with that code? Do you want to use it for training neural networks, as in some of your previous publications? Basically, give it random shapes to which you can apply automatic meshing, so that you can create a lot of training data (flow fields and drag/lift) automatically?
> Hmmm, ok, I would not have expected that. So you reordered the order of the loops as well, meaning loop y(loop x(loop d(....))), to support a (y,x,d)-layout?
Yes. I went through most of my routines, and most of them look like:

    loop on q:
        operate on array[q, :, :]

So to me, it seems consistent with the C array contiguity.
> The most important thing, of course, is to apply Numba to the collision and streaming routines. Combining both routines into one should also speed things up by a factor of 2, as you don't have to load the values twice, but then you would have to change the indices of the force calculation (so that can wait until the end).
Yes, that was one of the things to check afterwards. I saw several options for combining equilibrium/collision or collision/streaming; I have to give it a go.
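For reference when I do, a fused step could look roughly like this (a hedged sketch: w, cx, cy are the usual D2Q9 weights and velocities, om is the BGK relaxation frequency, and periodic wrapping stands in for the real boundary treatment):

```python
from numba import njit

# Fused collide-and-stream ("push") step for D2Q9 in a (D, X, Y) layout.
@njit(cache=True)
def collide_and_stream(f, f_new, w, cx, cy, om):
    q, nx, ny = f.shape
    for x in range(nx):
        for y in range(ny):
            # macroscopic moments of the cell, loaded once
            rho, ux, uy = 0.0, 0.0, 0.0
            for i in range(q):
                rho += f[i, x, y]
                ux  += cx[i] * f[i, x, y]
                uy  += cy[i] * f[i, x, y]
            ux /= rho
            uy /= rho
            usq = ux * ux + uy * uy
            for i in range(q):
                cu  = cx[i] * ux + cy[i] * uy
                feq = w[i] * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu - 1.5 * usq)
                # relax and push straight to the neighbouring cell
                f_new[i, (x + cx[i]) % nx, (y + cy[i]) % ny] = \
                    (1.0 - om) * f[i, x, y] + om * feq
```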
> Yeah, Numba does not recognise a lot of Numpy routines (they are not implemented yet), but from my experience, if you rewrite the code without them the speed-up is significant. The other option would be going the other way and eliminating the loops in order to use a Numpy-like structure, but this is a lot of work. I don't know; I honestly hate optimising Python, it is so intransparent about what it does compared to C++, where you can ask for reports about everything and have a look at the assembly code and the effects of different flags (even online, e.g. with Godbolt).
Agreed.
> I will have a look at the IBB out of curiosity! Sounds really interesting. I am really wondering what the effect on the accuracy of drag and lift is. Yeah, the formula has a wrong sign; it should be just like in my thesis. It is already listed in the erratum: https://github.com/lbm-principles-practice/errata/blob/master/errata.pdf
:)
> I am wondering: what is actually your goal with that code? [...]
That is a pretty good question, and the answer is that it is a consequence of a lot of things ^^
My original field is discontinuous FEM for nano-optics computations, and I spent 6.5 years working on it (PhD plus some time as an engineer, during which I wrote a pretty big code from scratch that is now used by the whole team I was in). I was working at INRIA (a French national research center, so public funds) on a fixed-term contract. In France, there is a maximum time you can stay on publicly funded fixed-term contracts. So, 6 months before the end, I started to look for something else in the area I live in (my wife works at the hospital as an intern, and at the time could not expect to change position for another 3.5 years, so I had to work in the area).
I found a position in another research center, on a private fixed-term contract, with a possible permanent contract after some time (that's where I work now). The original job was "serious rewriting/refactoring of a huge CFD C++ code", but by the time I signed the contract it had turned into "machine learning for CFD problems". Needless to say, I had very limited knowledge of CFD, and no experience in machine learning.
So for a year or so, I taught myself machine learning, as there were several expectations on this topic in the group. I have worked on moderately interesting supervised learning things, mostly using the in-house CFD solver to generate data. To me, the most interesting topic is deep reinforcement learning. Long story short, it can be used for non-parametric optimal control in almost anything, including CFD (I am working a lot on this at the moment). It can also be degenerated into an evolutionary-strategy-like optimizer, but with some interesting differences to ES (such as: there is no starting point, and you can re-use previous successful optimisations to speed up future ones). I have a first submitted publication on this, https://arxiv.org/abs/1908.09885, although it has evolved a lot since this arXiv version. I have good hope that this method will be efficient, and I am pushing several projects on this topic.
And now I can come to the LBM code! The in-house CFD code, although efficient, is an ugly monster that I cannot share with anyone, because it is sold as an industrial solver. FEniCS is nice-ish, but it is slow as f*ck, and I ended up having to run it in a docker container (which I hate) to be able to work with students, collaborators... it was a big mess and I hated that. As a matter of fact, I also like to take time to look at things that are not directly related to my work, as spending time on those things most often brings me ideas, different points of view, etc. I have several small projects that explore things such as RBF networks or level-set descriptions that I may never use directly, but that contribute to a general scientific culture. In the past, a project of this kind ended up as a paper on the fitting of experimental dispersion laws for several materials (https://www.spiedigitallibrary.org/journals/Journal-of-Nanophotonics/volume-12/issue-3/036014/Fitting-experimental-dispersion-data-with-a-simulated-annealing-method-for/10.1117/1.JNP.12.036014.short?SSO=1).
I started the LBM code with the idea that (i) it would be useful as a small solver for my current applications (all I need is accurate drag and lift on arbitrary 2D shapes at small Reynolds numbers, from 10 to 500 max), and I could share it easily, and (ii) it would contribute to my culture. This time, however, it took me much longer than I initially thought ^^
Now you know everything!
I just pushed the IBB to a separate branch, if you want to have a look at it.
Yeah, in Austria there is a similar rule in place as well: either the institution offers you a long-term position, such as tenure track at a university, or you at least have to switch to another official institution...
It must have been overwhelming, the last year or so, then... I think both topics (CFD and machine learning) are among the hardest to master in their respective fields, as already the basics are quite involved and complex.
I had a brief look at your paper some days ago already, and I think such combinations are very promising. I remember the first time I saw machine learning in fluid mechanics was some 3 years ago in a video by Two Minute Papers: https://www.youtube.com/watch?v=iOWamCtnwTc. Until then I thought fluid flow would be too complex, with all its regimes. Then I started thinking about it: some of my very old and experienced professors could estimate the flow field and values for drag, lift, pressure and things like that quite accurately, and the more time I put in, the more I noticed that I could too. You can look at simulations and most of the time know pretty quickly what might be wrong, just from intuition, as you already have the approximate flow field in mind. I think a machine learning algorithm can do that too, and even better than we humans.
It all sounds really nice, like a dream job for a researcher, if an employer gives you that much flexibility to work this way. I think it is very important for a good researcher not only to know his/her current topic, but even more so to have broad knowledge in general, in particular about related disciplines. I have met quite a few researchers who are either stuck in a single topic (they did their MSc, PhD and maybe even habilitation on it) and are so proud of what they do, and cocky, that they can't appreciate the effort and discipline of other researchers/disciplines, or others who just try to ride the wave and switch to whatever topics are currently trending without ever obtaining profound knowledge of any field. The latter just try to publish without any sort of fascination or conviction of its usefulness, instead just looking for another publication to put on their CV, completely career-driven. Then they review other people's work only to ask them to cite their own research. On the other hand, in particular through my two older siblings, both researchers, I have met very diligent professors and researchers who were naturally curious and had this humble way of explaining things to you without any sort of arrogance. One of them is well over 90, a professor emeritus, but he still goes to the office every day, including weekends (only on Sundays does he go a bit later than usual, in the afternoon), and when he talks about his research he still smiles like a small kid listening to an old grandpa's stories, completely taken. I think people who have an attitude towards knowledge like that are truly admirable, and one should strive thereafter.
I think it is very important to program something like this at least once yourself: you may think you understand it by looking at formulas, but I think true understanding always involves a practical part. You have definitely learned a lot in the process! :)
It all looks correct to me now. :) I think the original paper also has a version with quadratic interpolation, which I think only requires an extra term. If you are not fully satisfied with the final accuracy, you might give that one a go as well; it should be a matter of 10 minutes to implement. I am really wondering if IBB then makes a difference in the results, in particular for drag and lift. Please let me know. :)
> It must have been overwhelming, the last year or so, then... I think both topics (CFD and machine learning) are among the hardest to master in their respective fields, as already the basics are quite involved and complex.
Supervised learning, well, it's actually not that technical; in the end it relies on very few things, wrapped in a lot of cookbook recipes "that work". There is a lot to learn and to do, of course, but I find it way less technical than CFD, for example. Deep reinforcement learning is a bit more complex to me.
> I had a brief look at your paper some days ago already, and I think such combinations are very promising. [...]
Yeah, that paper got a lot of attention. There are a lot of good things in it, but one must always remember the amount of data that had to be created to train it.
What I dislike in most supervised learning applications (including what I do!) is that you make 10,000 simulations of, say, water splashing on a surface, then you train your network and get a tool that can simulate water splashing on a surface, period. I have a lot of hope in networks that are trained to execute "smaller tasks" and are then combined together to make a neural-network-based simulation tool. There are some out there, but it has been quite some time since I last checked how well they work.
> It all sounds really nice, like a dream job for a researcher, if an employer gives you that much flexibility to work this way. [...]
Yeah, I believe it is beneficial, although it is sometimes hard to make people understand that it is ^^
> I think it is very important to program something like this at least once yourself: you may think you understand it by looking at formulas, but I think true understanding always involves a practical part. You have definitely learned a lot in the process! :)
Indeed, and much of it thanks to you!
Hmmm, yeah, I agree: most computer vision and machine learning stuff I have encountered seems very empirical, but an overwhelming amount of experience is nonetheless required. Computational fluid dynamics, on the other hand, is indeed very technical. There is a solid base behind it already, but the mechanics of fluids can be very different, from microfluidics and non-Newtonian fluids such as blood, over turbulent aerodynamics dominated by inertia and (highly) compressible flows in turbo-machinery and around supersonic airplanes, to very dilute flows around space shuttles during re-entry: every problem has its own theoretical background and computational methods. In particular, the theories of turbulence and chaotic motion are very involved and have kept busy several remarkable mathematicians, such as Henri Poincaré and Andrey Kolmogorov. Funnily enough, particularly in that field you can see a lot of similarities to multi-body mechanics. This, in my opinion, makes the kinetic theory of gases and the Boltzmann equation very interesting; the theory behind it is really impressive. The incompressible lattice Boltzmann method in its current state is, for me, some sort of limited gimmick: it helps you simulate transient flows in a fast way, but the current discretisations take away most of the underlying theoretical power, which would also hold for compressible thermal gases.
Hmmm, I have only attended a few lectures on the topic and never trained any complex machine learning algorithm myself, but some friends who are doing their PhDs in the field have already emphasised the amount of data that is required, and that the remarkable thing about the Google fluids papers is mainly the amount of training data and the computational power behind them. I think this gap in computational power prevents good researchers in this field from excelling: some companies like Google and Facebook have the computational power and financial means, while even most world-class universities do not.
I am glad to hear that! :) Let me know then what the difference in drag and lift is with IBB; I am really looking forward to seeing what the impact might be.
> Hmmm, yeah, I agree: most computer vision and machine learning stuff I have encountered seems very empirical, but an overwhelming amount of experience is nonetheless required. Computational fluid dynamics, on the other hand, is indeed very technical.
Yes; once you are past the technical part, a large part of "cooking" remains. People who have been in the field for some time will have good intuitions regarding architecture questions, but building that intuition is very hard. Fortunately, you can start from existing reference architectures.
> Hmmm, I have only attended a few lectures on the topic and never trained any complex machine learning algorithm myself, but some friends who are doing their PhDs in the field have already emphasised the amount of data that is required, and that the remarkable thing about the Google fluids papers is mainly the amount of training data and the computational power behind them. [...]
What you say here is very true: many people in the field of ML think the research would not be this advanced without the power of the GAFA. Whether or not the field of (supervised) learning keeps progressing at its current pace remains a big question, though.
> Yes; once you are past the technical part, a large part of "cooking" remains. [...]
Potentially, one could train a neural network to choose an appropriate architecture as well. :)
> What you say here is very true: many people in the field of ML think the research would not be this advanced without the power of the GAFA. Whether or not the field of (supervised) learning keeps progressing at its current pace remains a big question, though.
That's what one of our professors in that field (h-index 85) emphasised as well. He has seen the advent of a lot of different approaches and architectures over time, but in the end they all boom: in the beginning everybody thinks there is no limit to what they can do, yet at some point they reach their limits. He named the choice of architecture, the amount of training data, the computational requirements and the missing transparency as some of the drawbacks of deep learning, and further emphasised that a lot of the improvement in the field was mainly based on the explosion of massively parallel, power-efficient, yet affordable architectures and computing power (in particular graphics accelerators) in the last couple of years, which he could never have dreamed of back in the 80s. Then he lost himself in some anecdotes about computing power back in the day.
> Potentially, one could train a neural network to choose an appropriate architecture as well. :)
That was actually a project of some people at Google. I can't find the related publications at the moment, but from what I remember (from one year ago), it was extreme in the required resources, and only they could manage such things.
> That's what one of our professors in that field (h-index 85) emphasised as well. [...]
Typical behaviour of good professors, losing themselves in their memories :p
Hi again @2b-t,
Something I noticed regarding the drag-lift computation is that my problem seems to be (partially) related to the discretization. Let's consider the same configuration for a cylinder and a square of the same lateral size:
Disregarding the normalization/scaling, which is probably wrong:
- cylinder: final drag and lift = 12.692730642588893, -133.4031935237444
- square: final drag and lift = 15.570420109494938, -0.06360486760453934
So the absence of symmetry in the application of the MEM yields large errors here, especially in the lift. If I refine the cylinder, I get significant differences in the final drag and lift. There must be a blatant coding mistake somewhere, but I can't locate it.
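For comparison, here is roughly the momentum-exchange loop I am aiming for (a hedged sketch, assuming stationary walls with simple bounce-back; solid, boundary, cx and cy are stand-ins for the actual arrays in the code):

```python
import numpy as np

# Each fluid boundary cell is visited exactly once, and every link
# crossing into the solid contributes 2 * c_i * f_i of the post-collision
# population to the total force.
def momentum_exchange(f, solid, boundary, cx, cy):
    """f: (9, nx, ny) post-collision populations; boundary: (N, 2) integer
    coordinates of fluid cells adjacent to the obstacle."""
    fx = fy = 0.0
    nx, ny = solid.shape
    for x, y in np.unique(boundary, axis=0):  # count each cell once!
        for i in range(9):
            xn, yn = x + cx[i], y + cy[i]
            if 0 <= xn < nx and 0 <= yn < ny and solid[xn, yn]:
                fx += 2.0 * cx[i] * f[i, x, y]
                fy += 2.0 * cy[i] * f[i, x, y]
    return fx, fy
```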