hercules-390 / hyperion


Archlvl.c is in limbo. What to do. #183

Open jphartmann opened 7 years ago

jphartmann commented 7 years ago

It looks as if Jan started to implement architecture levels independent of anything else in the universe. And then went AWOL.

I suspect that he wanted to be able to produce a particular model, no doubt so that he could test code against the various features, as they (in Danish idiom) came faster than a hailstorm of snapses.

So far in my testing I've only come across crypto being sensitive to this (ESAME different from Z).

Does anyone of you need this granularity? Are any of you willing to implement it--lots of floating point instructions depend on features; it is non-trivial.

To implement that sort of granularity, the opcode table would need to be built dynamically, and a slew of instructions would have to test the actual list of features installed (see e.g., dyncrypt line 289).

What to do?

Juergen-Git commented 7 years ago

If it ain't broken, don't fix it... I came across archlvl.c too, when I was activating the crypto instructions for S/370 mode, and, after having defined everything I thought needed to be defined, still found the instructions were not working. I then just patched what was needed to get them going, thus leaving that beast untouched as far as possible. Basically, as long as the settings aren't used anywhere (except in dyncrypt and two or three other locations, iirc) they also do no harm. And if someone wants to use them: so be it... that person will then hopefully know what (s)he is doing.

To summarize: I wouldn't change a thing at opcode handling, dynamic ARCHLVL processing, and the like (all too dangerous!), and leave the definition of "special instruction set combinations" to dynamic modules.

s390guy commented 7 years ago

ARCHLVL came about as the result of a number of related needs. When OS's, particularly Linux, started looking for the PFPO STFLE bit being set, there was no way to enable it. Initially it was a way to manage STFLE bits.

It then was extended to enabling/disabling Hercules-specific features. It was intended as an alternative to defining a new command every time a new feature that needed run-time management came along. At the time I think the plan was to obsolete some config commands, for example the DIAGNOSE X'008' controls, in favor of archlvl commands. This of course never happened. The initial feature of the HOST RESOURCE ACCESS FACILITY uses archlvl; I know, I wrote it. As a way to manage Hercules-specific capabilities, archlvl does a better job. How complete it is might require a review.

What was never implemented was actually enabling or disabling a facility's instructions in concert with its related STFLE bit. That requires changes to the actual functions implementing the instructions and never happened. The alternative to those changes would be code that does the equivalent to the opcode tables like s370x does.

The problem with the opcode tables is they are restricted to three levels. Anything more than that will likely never be implemented due to the way the tables are built today. So, smaller gradations of capability have to be implemented outside of the opcode tables. archlvl gives that opportunity. archlvl provides the run-time equivalent of the features controlled by featxxx.h.

Users like the ability to determine what they want to run when they start up hercules. Conditional compilation, think featxxx.h, does not do that. Something like archlvl or loaded modules is required for that. We have essentially two mechanisms to achieve similar effects. Internally archlvl is more granular and general. Loaded modules only affect instructions. archlvl can affect whatever it is coded to detect beyond instructions (for example, the addressing modes of s380). For that reason, it has some intrinsic value that has not really been leveraged.

Juergen opts for doing nothing. Leave things the way they are. I would like to see the need for all of the different forks to go away. Most of the variation is around instructions. Although s380 goes beyond that to increased addressing modes, now to 64-bit. To the extent that the capabilities sought by loaded modules are Hercules specific features, archlvl probably should have a role to play.

The role of featxxx.h starts to go away. Why? Because if you are going to enable an instruction for s370 or esa390 (and there are a large number of z instructions that are enabled for esa390) that was not previously there, it has to have a _370 or _390 suffixed function to execute it. That means the function has to be included in the build. Which further means, each featxxx.h starts to have more and more of the same features enabled. The direction this takes is three opcode tables enabled for many of the same instructions.

What to do? I too vote for keeping archlvl and loaded modules. But I vote for elimination of featxxx.h and the conditional assembly of features. Maybe not exactly the question asked, but what I think makes sense. In the short run, any feature that only controls instructions; in the long run, other features. One of the major problems of Hercules from a developer's perspective is the obfuscation of the code created by the horrendous level of macros and conditional compilation. The more conditional compilation is eliminated, the more understandable the code. A few may actually build an s370-only Hercules, but most simply allow the other archs to go along for the ride. The developers are certainly aware that turning features on or off in featxxx.h may have rather unpredictable results. That is rarely done. What is needed is run-time configuration.

Maybe the piece missing in archlvl is the ability to identify enabling and disabling functions for the feature. That would allow the elimination of loaded modules for these purposes. Whether that is of value is unclear at the moment. If I had to vote for loaded modules vs. archlvl, I would vote for archlvl. It simply has more inherent power.

My thoughts for what they are worth.

Harold Grovesteen

jphartmann commented 7 years ago

I like your thinking, Harold; it gets us closer to real hardware.

So, the basic unit is a facility, which includes, in general, registers and instructions. A [viable] set of facilities is called a configuration. The set of all facilities forms a hierarchy. An instruction belongs to at least one facility; it may belong to several if it has been enhanced. Actual installed facilities are enumerated in the extended facilities list.

A number of predefined configurations exist; the active one is selected by archlvl.

Archaeologists can dig out the facilities of the ESA architecture; even the XA one (from the appropriate APXB).

The use of archlvl to turn on the bit for an unimplemented facility is a fool's dream; it should go away: you tell an operating system that it can do something and then it all explodes in your face when it turns out that you lied; you cannot treat an operating system like some people treat their girlfriends (m/f).

Configurations can be enhanced by enabling a facility that is implemented, but not enabled in the underlying configuration. To do this reliably we need a way to determine whether a facility is compatible with a configuration. E.g., 64-bit arithmetic (N instructions in APXB) is unlikely to be compatible with the 370 configuration (unless 370 has been enhanced by the hypothetical 64-bit facility, which does not drag XA I/O with it).

Is there any utility in turning off a facility? The only reason I can think of is to disable a facility that has not been implemented properly while it is being repaired. Such action is perhaps best done back at the ranch.

We shall likely need to distinguish between facilities that implement problem state only and those that require operating system assistance, which operating systems may be more pernickety about.

We could build a proper table to decode operation codes; this would require information we have squirreled away from multiple sources, notably Appendix B. The table would contain much more information than does the current opcode table, which contains the address of the instruction simulation function only.

The tool to build these tables is likely not going to be easily available on Windows.

This restructure can be done in parallel with the existing scheme of things; it might also provide the vehicle for a gradual transition from the current implementation of instructions to one that has a smaller cache footprint and thus likely faster execution.

ivan-w commented 7 years ago

On 1/8/2017 11:24 PM, John P. Hartmann wrote:

I like your thinking, Harold; it gets us closer to real hardware.

What do you mean by "real hardware"... Your computer running hercules is already real hardware!

--Ivan

s390guy commented 7 years ago

John, your suggestion of building proper configurations from facilities and an expanded opcode table are very desirable enhancements. This should help to move away from the plethora of conditional compilation states. Something I see as necessary.

Tools to build such tables from a database will work on Windows and other machines if the implementation language is chosen to do so. My assembler has a database that already contains more information than the hercules opcode table does. The assembler and its tools all run on Windows and elsewhere. ASMA is implemented in Python. Works great. There are multiple options; it probably ends up being the one chosen by whoever elects to implement this scheme. It is definitely needed.

Without the ability to transition to these changes, they will not ever happen. So, that is an absolute requirement in my book.

The sense I am getting is that the archlvl concept needs to be more fully embraced, but its actual implementation might be outside of the main code. But the end results are usable by the main code. Ultimately it might free us of just the three modes controlled by the three supported architectures. Feels like the right direction to me.

jphartmann commented 7 years ago

We need a benchmark to be sure that performance does not suffer.

The aggregated test cases produced by make check are easily controlled and run reasonably quickly. They also exclude the logger and most other paraphernalia. On the downside, so far no test enables translation, but that could be fixed.

Other suggestions are welcome. Please keep in mind that reproducibility is important. Running time perhaps not so much.

dasdman commented 7 years ago

On 01/09/2017 10:26 AM, s390guy wrote:

John, your suggestion of building proper configurations from facilities and an expanded opcode table are very desirable enhancements. This should help to move away from the plethora of conditional compilation states. Something I see as necessary.

Agreed.

The opcode tables in operation should be dynamically created, with the initial load created by the core architecture, and finished by the options selected. Generation levels could also be created by having a mechanism to select, which would pull in a predefined set of options.

jphartmann commented 7 years ago

I have created the locked issue 185 to distill the current thinking. Please append comments here.

jphartmann commented 7 years ago

I checked the use of computed gotos (aka label variables) with Fish a year ago or so and he indicated that Windows does not do it. So regrettably, it is out.

Anyhow, you are now far ahead of what I am trying to deal with, namely the contents and manipulation of what is clearly a database.

At the moment I'm trying to reconcile the STFLE bits with the codes in Appendix B so that I can see which facility provides which instructions, but the mapping is not 1:1.

s390guy commented 7 years ago

I have been thinking about this for a long time. I believe, as I sort of talked my way to in my response (which should have gone here; apologies), that the very first step we need to take is the ability to define the configuration in use from outside of Hercules. This means that when Hercules is started, a hardware configuration gets loaded, similar to what s370x does, but for the entire configuration. This would eliminate the three sets of tables during run time. It does not eliminate the three sets of functions.

Until we get to external control of the instruction decode table, we will never migrate to a new CPU. That means the very first task is database creation and generating the loaded module that populates the table, or tables if we need to keep all three for a while. We need to look at the issues around ESA/390 / z mode changes.

If it helps, you might want to look at the SATK MSL files. Instructions are defined for the assembler by feature. All instructions of the feature are grouped together. And the cpu being targeted by the assembler is built from the list of features. There may of course be some errors. It was all manual looking at all of the PoOs. I might have done some of the work already.

jphartmann commented 7 years ago

I already looked at the MSL tables. There are some spelling differences and no definition of operand attributes. In the end it will likely boil down to whether I can get information out of Appendix B. Around the -8 edition, the PoO started to be formatted in a way that completely thwarted all scraping tools; I shall have to see whether they have got better in the meanwhile.

ghost commented 7 years ago

If it helps, you might want to look at the SATK MSL files. Instructions are defined for the assembler by feature. All instructions of the feature are grouped together. And the cpu being targeted by the assembler is built from the list of features. There may of course be some errors. It was all manual looking at all of the PoOs. I might have done some of the work already.

why completely reinvent the wheel and not take a look at how others have done it?

I just looked at the s390-opc.txt in the gnu binutils package and the classification seems good and complete to me

and my 1/2 cent about the arch-level: why not follow the pattern used by the IBM compilers and classify accordingly, with the appropriate extensions to satisfy the hercules quirks

see the ARCH option of the cobol, pl/1, and c compilers

cheers e

jphartmann commented 7 years ago

Or indeed HLASM, as I already referred to.

One of the best sources of information is the HLASM opcode table, but IBM keeps that locked down in Hursley.

ghost commented 7 years ago

On 11 Jan 2017, at 21:55, John P. Hartmann notifications@github.com wrote:

Or indeed HLASM, as I already referred to.

One of the best sources of information is the HLASM opcode table, but IBM keeps that locked down in Hursley.

the IBM opcode tables for the gnu binutils were contributed by IBM

Contributed by Martin Schwidefsky (schwidefsky@de.ibm.com).

the content IMO should be similar to the ones from HLASM

cheers e

jphartmann commented 7 years ago

It isn't the same. HLASM also describes the operands.

ghost commented 7 years ago

It isn't the same. HLASM also describes the operands.

well I got more curious and lurked around …

the s390-xxx.c stuff has all the info needed,

and as stated in s390-opc.c, also the description of the operands

/* This file holds the S390 opcode table. ... This file also holds the operand table. All knowledge about inserting operands into instructions and vice-versa is kept in this file. */

/* The operands table. The fields are bits, shift, insert, extract, flags. */

I know, I know … all the above stuff is for disassembling/printing an instruction but to collect info it should be good enough

anyway open systems and friends seem to suffer from macroitis

cheers e

srorso commented 7 years ago

One issue with s390-opc.c is that it is GPL v3.

Whatever includes it might need to be brand new and dual-licensed.

Best regards, Steve Orso

+1 610 217 7050

ghost commented 7 years ago

On 11 Jan 2017, at 23:24, Stephen Orso notifications@github.com wrote:

One issue with s390-opc.c is that it is GPL v3.

Whatever includes it might need to be brand new and new and dual-licensed.

I never suggested using it as is, just looking around for inspiration and ideas. e

srorso commented 7 years ago

That makes sense.

I still think about how to deal with Q.

Best regards, Steve Orso

jphartmann commented 7 years ago

You cannot deal with Q. Even the dead have a say there.

s390guy commented 7 years ago

John, it is a little unclear what you are trying to do. I thought you were trying to match instructions to features. That is documented in the MSL files. In particular, the file dedicated to s390 has each feature, its instructions, and the date the feature became available. That information was derived from a review of every published version of the PoO for z, right through the most recent.

Now you want information about the operands. For that, the MSL formats.msl file details source and instruction fields, that is, operands. No, it does not describe how each operand is used, if that is where you are going.

If you want any of the information in those files massaged in some way, that is very doable. Just let me know. The tools already exist to do that. Even if it is just a foundation, it could reduce some of your work. There are over 1200 instructions total documented in the files. Anything that gives a leg up helps with the effort. Adding to them with the new information may be the way to go. The offer is out there.

If you want the fun of doing all of that research yourself, enjoy. As others have suggested, leverage what already exists where possible.