oshi / oshi6

Repository for planning/coordination of OSHI 6.0

Initial reference implementation #14

Open cilki opened 5 years ago

cilki commented 5 years ago

Far from 100% finished, but here's what I have so far. The most important files to look at are:

I implemented rudimentary fetching logic for Nic, Disk, and Firmware on Linux only (this stuff is just for demonstration and should be rewritten later).

Note about the definitions file format:

I decided to use JSON rather than YAML for the definitions file for several reasons. The content also evolved from our initial specification. For example, I moved permissions information out of the definitions file since that's more of an implementation detail. I'll update the specification in the README eventually.

Package Survey

- gen/main/java (generated API)
  - Container
- src/main/java (non-generated API, drivers, and examples)
  - Query Method
  - Driver
  - Extension Driver

Big Picture

The main idea behind the design is to make the query methods first-class citizens (since that's OSHI's business logic after all). Query methods are organized into driver classes for each platform (or extension). For example, DiskDriverLinux contains query methods for the basic attributes of a disk. An extension like DiskDriverLinuxSMART relies on the presence of the smartmontools package and provides additional attributes. Attributes can be queried at whatever granularity we choose because each query method can be responsible for more than one attribute.
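
Roughly, the driver split might look something like this; the @Query annotation and Disk stub below are illustrative stand-ins, not the actual classes in this PR:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

// Hypothetical @Query marker; the actual annotation set in the PR may differ.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Query {
    String[] value(); // attributes this query method is able to populate
}

// Stand-in for the generated Disk container (just a bag of values here).
class Disk {
    final Map<String, Object> attributes = new HashMap<>();
}

// Basic platform driver: query methods for core disk attributes.
class DiskDriverLinux {
    // One query method can be responsible for several attributes at once.
    @Query({"READ_BYTES", "WRITE_BYTES"})
    public void queryIoCounters(Disk disk) {
        // e.g. parse /proc/diskstats and set both attributes in one pass
        disk.attributes.put("READ_BYTES", 0L);
        disk.attributes.put("WRITE_BYTES", 0L);
    }
}

// Extension driver: only usable when the smartmontools package is present.
class DiskDriverLinuxSMART {
    @Query({"TEMPERATURE"})
    public void querySmartTemperature(Disk disk) {
        // e.g. run `smartctl -A` and parse the temperature attribute
        disk.attributes.put("TEMPERATURE", 35);
    }
}
```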

When building a driver to attach to a container at runtime, the available query methods are organized into a stack. When the container needs to fetch an attribute, the driver invokes each query method in the stack until one succeeds. This is implemented with the MethodHandle API, so reflection overhead is minimal.
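
A simplified sketch of that dispatch, assuming the stack is a collection of bound MethodHandles (the real driver-building code presumably does more bookkeeping):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Method;
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative only: resolve each query method to a MethodHandle up front,
// then walk the stack until one of them succeeds.
class QueryStack {
    private final Deque<MethodHandle> stack = new ArrayDeque<>();

    // Bind a driver instance's query method into the stack.
    void push(Object driver, Method queryMethod) throws IllegalAccessException {
        stack.push(MethodHandles.lookup().unreflect(queryMethod).bindTo(driver));
    }

    // Invoke each handle in order; stop at the first one that doesn't throw.
    boolean fetch(Object container) {
        for (MethodHandle handle : stack) {
            try {
                handle.invoke(container);
                return true;
            } catch (Throwable failed) {
                // fall through to the next query method in the stack
            }
        }
        return false;
    }
}
```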

Although driver classes can contain anything (including closable resources), they must have at least one query method. Additionally:

Remaining Issues

There are at least a few unsolved issues remaining:

Anyway, I'm happy to make any changes, big or small :+1:. I'm also writing a paper with a more detailed explanation of everything, which should be done by the middle of the month.

cilki commented 5 years ago

Why gen/main/java vs. src/gen/java or src/main/gen?

There doesn't seem to be a standard, so I just wanted to keep the generated sources as far away from the non-generated stuff as possible. I'm used to gen/main/java from working with Protobuf, but I'll change it to src/gen/java (which sounds better to me anyway). IDE support should be no different between the two; we just have to configure an additional source directory in Maven.
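
For reference, one common way to register an extra source root like src/gen/java with Maven is the build-helper-maven-plugin (whether we use that or some other mechanism is still an open detail):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>add-generated-sources</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>add-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>src/gen/java</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>
```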

Status indicator on query method

I also like the idea of returning a status indicator enum. I'll make it so that query methods can either be void or return the status enum.
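
For example (the enum name and values here are just placeholders for whatever we settle on):

```java
// Hypothetical status indicator returned by query methods.
enum QueryStatus {
    SUCCESS,     // attribute(s) populated
    UNSUPPORTED, // the data source isn't available on this system
    FAILED       // the data source is present but the query failed
}

class FirmwareDriverLinux {

    // Query methods could report the outcome explicitly...
    public QueryStatus queryVersion() {
        // e.g. read /sys/class/dmi/id/bios_version
        return QueryStatus.SUCCESS;
    }

    // ...or stay void and signal failure by throwing, as they do now.
    public void queryVendor() {
        // e.g. read /sys/class/dmi/id/bios_vendor
    }
}
```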

@RequiresRoot needs a bit more fine grain

I'm glad this situation arose because it highlights the flexibility of the drivers. To solve this elegantly, you just define two query methods for the same attribute (ProcessorID). One has @RequiresRoot and fetches from the standard data source; the other is an extension query method that requires cpuid. When the driver is initialized, ComponentDriver finds both methods and checks the requirements (whether we have root access and whether cpuid is installed). Depending on the situation, both, one, or neither of the query methods will be called when the user queries for ProcessorID. Additionally, @Fallback can optionally configure which method has priority. Handling this kind of situation is probably one of the biggest strengths of my design.
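
Concretely, the two methods could look something like this (the annotation declarations and Processor stub are illustrative stand-ins for the real ones):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical stand-ins for the PR's real annotations.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface RequiresRoot {}

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Fallback {}

// Stand-in for the generated Processor container.
class Processor {
    String processorId;
}

// Basic driver: the standard data source, which needs root access.
class ProcessorDriverLinux {
    @RequiresRoot
    public void queryProcessorId(Processor processor) {
        processor.processorId = "value read from the privileged data source";
    }
}

// Extension driver: only registered when the cpuid package is installed.
class ProcessorDriverLinuxCpuid {
    @Fallback // lower priority if the privileged method is also available
    public void queryProcessorId(Processor processor) {
        processor.processorId = "value parsed from cpuid output";
    }
}
```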

While @RequiresRoot should probably be renamed something more cross-platform, I think the usage is exactly what we want.

I'm not sure what you mean by closable resources, or why PDH Counters are an exception

I think we discussed in #9 that the only resources we should hold onto are PDH counters. These would be closable resources because the driver needs to be "closed" in order to know when to release the counters. The design doesn't incorporate try-with-resources anywhere, so Java 9's Cleaner API is probably the only way to release those resources. After looking at the API more closely, I think our use case is exactly what it's intended for.
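
A minimal sketch of how a driver holding a PDH counter could lean on the Cleaner API (the handle type and release call are placeholders, not real PDH bindings):

```java
import java.lang.ref.Cleaner;

// Illustrative only: Cleaner releases the native counter once the driver
// becomes unreachable, since the design has no try-with-resources anywhere.
class PdhCounterDriver {

    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup action must not hold a reference to the driver itself.
    private static final class CounterState implements Runnable {
        private final long counterHandle;

        CounterState(long counterHandle) {
            this.counterHandle = counterHandle;
        }

        @Override
        public void run() {
            // release the native handle here, e.g. a PdhCloseQuery-style call
        }
    }

    private final Cleaner.Cleanable cleanable;

    PdhCounterDriver(long counterHandle) {
        this.cleanable = CLEANER.register(this, new CounterState(counterHandle));
    }
}
```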


OK, so we still have the problem that the compatibility of these attributes across five platforms doesn't map perfectly onto an inheritance hierarchy. Specifying that an attribute is compatible with a platform works fine, but specifying that an attribute is incompatible with a platform forces that attribute to become platform-specific on all of the other platforms. This isn't ideal, so I also experimented with a radically different type of API.

I call it the AttributeKey model because each attribute has a public static final enum-like object that the user references when retrieving/querying an attribute. AttributeKey is generic, so you don't have to worry about type safety, but the catch is that the usage looks like this:

```java
disk.get(DiskAttribute.READ_BYTES);  // attribute compatible with every platform
disk.query(DiskAttribute.INODES);    // Linux platform-specific attribute
```

This eliminates all of the platform-specific container classes because the container objects basically become wrappers for a Map. It solves some problems by relaxing the API, but creates some new problems and isn't as nice to use.
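
For context, a stripped-down version of the AttributeKey idea might look like this (simplified; the real proposal would generate the keys and wire query() into the driver stack):

```java
import java.util.HashMap;
import java.util.Map;

// Generically typed, enum-like key: one public constant per attribute.
final class AttributeKey<T> {
    private final String name;

    private AttributeKey(String name) {
        this.name = name;
    }

    static <T> AttributeKey<T> of(String name) {
        return new AttributeKey<>(name);
    }

    @Override
    public String toString() {
        return name;
    }
}

final class DiskAttribute {
    static final AttributeKey<Long> READ_BYTES = AttributeKey.of("READ_BYTES");
    static final AttributeKey<Long> INODES = AttributeKey.of("INODES"); // Linux-only
}

// The container collapses to little more than a typed wrapper around a Map.
class Disk {
    private final Map<AttributeKey<?>, Object> values = new HashMap<>();

    @SuppressWarnings("unchecked")
    <T> T get(AttributeKey<T> key) {
        return (T) values.get(key);
    }

    <T> T query(AttributeKey<T> key) {
        // in the real design this would invoke the driver's query stack first
        return get(key);
    }
}
```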

dbwiddis commented 5 years ago

I think we discussed in #9 that the only resources we should hold onto are PDH counters.

Ah, I've since changed that during my API overhaul. I release all the PDH resources after each query. I measured the performance difference and it's negligible for periodic polling.

I even got rid of my cached CFStrings used for the macOS dictionary lookups. It's kind of a pain to recreate them (especially the "constants") on each query, but it avoids pointer leaks. So I really don't think I'm keeping any resources open anymore.

I like the attribute key query structure (it's similar to how I fetch results from the WMI queries using an enum), but I missed the connection between that and compatibility. Mostly, there will be things that are all-OS, all-OS-except-one (usually Windows), or one-OS-only.

cilki commented 5 years ago

Here's what the container API looks like after we specify that the inode attribute (for example) is incompatible on Windows, but is compatible with basically everything else:
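
Something along these lines (the class and method names are a hypothetical sketch, not the actual generated code):

```java
// Hypothetical sketch: only the platform types that support inodes expose an
// accessor for them; the cross-platform Disk type doesn't have one at all.
abstract class Disk {
    public abstract long getReadBytes(); // compatible with every platform
}

class DiskLinux extends Disk {
    @Override
    public long getReadBytes() {
        return 0L;
    }

    // inode count only exists on the platforms that declare compatibility
    public long getInodes() {
        return 0L;
    }
}

class DiskWindows extends Disk {
    @Override
    public long getReadBytes() {
        return 0L;
    }
    // no getInodes() here
}
```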

Now you have to find out what platform you're on and cast your Disk to DiskLinux (or whatever) to get the inode count. Attribute keys solve this precisely because they're less safe: the compatibility check moves to runtime instead of being enforced by the type system. With attribute keys, you just do this:

```java
disk.get(DiskAttribute.INODES);
```

If you're on one of the 4 platforms that support inodes, then you're good.

I use the attribute key pattern in my other project to reference nodes of a hierarchical tree and it works pretty well, but I'm not convinced it's the best API for OSHI.

cilki commented 5 years ago

I think offering AttributeKey access in addition to the current API could work because users could then choose whether they want to be platform-safe (by calling getWindowsSystem(), etc) or platform-unsafe (by simply calling get with a platform-specific attribute key).

This adds some complexity, but I'll try to figure out if it would be worth it.