by Javier B Perez:
Option #1: List all the supported features per board. https://gerrit.zephyrproject.org/r/#/c/5620/
by Javier B Perez:
Option #2: Create a matrix with board/feature. https://gerrit.zephyrproject.org/r/#/c/5621/
by Javier B Perez:
Option #3: Read the Kconfig defaults under arch/.
Example: arch/x86/soc/intel_quark/quark_d2000/Kconfig.defconfig.quark_d2000
...
if WATCHDOG
config WDT_QMSI
    def_bool y

config WDT_0_IRQ_PRI
    default 0
endif # WATCHDOG

if RTC
config RTC_QMSI
    def_bool y

config RTC_0_IRQ_PRI
    default 0
endif # RTC

if GPIO
...
From this we can see that the SoC has WATCHDOG, RTC, and GPIO.
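For illustration, a rough sketch of Option #3, assuming we only look for top-level "if FEATURE" guards in a Kconfig.defconfig fragment (real Kconfig parsing is considerably richer; the path in the usage comment is the one from the example above):
# Sketch only: naively infer SoC features from "if FEATURE ... endif" guards
# in a Kconfig.defconfig fragment. Nested ifs, "depends on" and "select" are
# ignored; this just illustrates the idea behind Option #3.
import re

def guarded_features(kconfig_path):
    features = set()
    with open(kconfig_path) as f:
        for line in f:
            m = re.match(r'^\s*if\s+([A-Z0-9_]+)\s*$', line)
            if m:
                features.add(m.group(1))
    return features

# Example (path taken from the comment above):
# print(guarded_features(
#     "arch/x86/soc/intel_quark/quark_d2000/Kconfig.defconfig.quark_d2000"))
# -> {'WATCHDOG', 'RTC', 'GPIO', ...}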
by Anas Nashif:
Any information we add to support this should not be part of sanitycheck itself; the script is just a runner and needs to get the information from somewhere else, so options #1 and #2 go away.
Option #3 is not scalable and will most likely go away at some point; it is also not easy to implement and might slow things down, i.e. trying to apply that logic to the Kconfig files.
Another option is to have a board definition file in the board directory that is easily parseable and can provide such information to sanitycheck, but can also serve other goals. In this case both the board and the test case would broadcast that they support a certain feature, and sanitycheck would match them and run the test on that platform.
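For illustration, a minimal sketch of that matching step, assuming a board advertises a set of feature names and a test case declares the features it depends on (the field names and the feature sets below are hypothetical, not an existing sanitycheck format):
# Sketch only: match test cases to boards by feature.
# The board feature sets and the "depends_on" notion are illustrative;
# the actual file format was still being discussed at this point.

def runnable_on(board_features, test_depends_on):
    """Return True if the board advertises every feature the test needs."""
    return set(test_depends_on) <= set(board_features)

boards = {
    "quark_d2000_crb": {"WATCHDOG", "RTC", "GPIO", "I2C", "PWM"},
    "frdm_k64f": {"GPIO", "I2C", "SPI", "RTC", "PWM"},
}

test_depends_on = {"WATCHDOG"}   # e.g. a watchdog driver test

for board, features in boards.items():
    if runnable_on(features, test_depends_on):
        print("build/run watchdog test on", board)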
by Javier B Perez:
Anas Nashif, what do you think?
Option #4: as you mentioned, a file per board where the board lists the supported features (config names).
boards/
    QMSI
    WATCHDOG
    RTC
    GPIO
    PWM
    I2C
    AIO_COMPARATOR
    SPI
    ...
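A sketch of how such a per-board file could be consumed and fed into the matching step above (the file name "supported.txt" and its location are assumptions; Option #4 did not fix a format):
# Sketch only: read a plain-text "supported features" file for a board.
# The file name and its location under the board directory are assumptions
# made for illustration.
import os

def board_features(board_dir):
    path = os.path.join(board_dir, "supported.txt")
    with open(path) as f:
        # one config name per line; ignore blanks and comments
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

# features = board_features("boards/x86/quark_d2000_crb")
# -> {"QMSI", "WATCHDOG", "RTC", "GPIO", "PWM", "I2C", ...}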
by Anas Nashif:
Good start, but let's make this future-proof and use some well-known and well-supported syntax. I was thinking of something along these lines:
https://github.com/ARMmbed/mbed-os/blob/master/targets/targets.json
The above is one big file for everything; we do not need to do the same thing. We could have a JSON file per board, for example:
{
    "name": "frdm_k64f",
    "supported_form_factors": ["ARDUINO"],
    "core": "Cortex-M4F",
    "supported_toolchains": ["ARM", "ZEPHYR"],
    "extra_labels": ["Freescale", "KSDK2_MCUS", "FRDM", "KPSDK_MCUS", "KPSDK_CODE", "MCU_K64F"],
    "macros": ["CPU_MK64FN1M0VMD12", "FSL_RTOS_MBED"],
    "inherits": ["ArmTarget"],
    "detect_code": ["0240"],
    "device_has": ["ANALOGIN", "ANALOGOUT", "ERROR_RED", "I2C", "I2CSLAVE", "INTERRUPTIN", "LOWPOWERTIMER", "PORTIN", "PORTINOUT", "PORTOUT", "PWMOUT", "RTC", "SERIAL", "SERIAL_FC", "SLEEP", "SPI", "SPISLAVE", "STDIO_MESSAGES", "STORAGE", "TRNG"],
    "features": ["LWIP", "STORAGE"],
    "supported": ["I2C", "SPI"],
    "release_versions": ["2", "5"],
    "device_name": "MK64FN1M0xxx12"
},
Per architecture we could have some global variables that can be inherited.
Lots of possibilities; let's not create something that is sanitycheck-specific, please :-)
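For illustration, a rough sketch of how per-board JSON with inheritance could be resolved against arch-level base definitions (the directory layout and the merge rules here are assumptions, not a settled design):
# Sketch only: load a per-board JSON definition and resolve "inherits" against
# base definitions. The file location and merge semantics (lists accumulated,
# scalars overridden by the child) are assumptions for illustration.
import json, os

DEFS_DIR = "boards/defs"   # hypothetical location of the JSON files

def load_def(name):
    with open(os.path.join(DEFS_DIR, name + ".json")) as f:
        return json.load(f)

def resolve(name):
    d = load_def(name)
    merged = {}
    for parent in d.get("inherits", []):
        merged.update(resolve(parent))          # parents first, child overrides
    for key, value in d.items():
        if key == "inherits":
            continue
        if isinstance(value, list) and isinstance(merged.get(key), list):
            merged[key] = merged[key] + value   # accumulate list-valued keys
        else:
            merged[key] = value
    return merged

# board = resolve("frdm_k64f")
# print(board["device_has"])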
by Andrew Boie:
I like where Anas Nashif is going with this, and I like how the proposal supports the concept of inheritance; over time I imagine we will be supporting lots of boards that are just minor variants of each other.
by Javier B Perez:
Anas Nashif and Andrew Boie: here is the base structure "Target" from mbed:
"Target": {
    "core": null,                     <- SoC
    "default_toolchain": "zephyr",    <- Default toolchain
    "supported_toolchains": null,     <- Toolchains (Zephyr, ARM, ARC, ISSM)
    "is_disk_virtual": false,         <- I don't see a usage for this
    "extra_labels": [],               <- Labels for the board/target
    "macros": [],                     <- I don't see a usage for this
    "device_has": [],                 <- Like I2C
    "features": [],                   <- I don't see the need for both device_has and features
    "detect_code": [],                <- I don't see the need for this
    "public": false,                  <- I don't see a usage for this
    "default_lib": "std"              <- Supported libraries, like minimal and standard
},
Then we can create super targets for each SoC family based on this.
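For illustration, a hypothetical layering along those lines, expressed as plain dicts (all names and field values below are invented for the example, not taken from any existing Zephyr or mbed file):
# Sketch only: "Target" base -> SoC-family super target -> board.
# Everything here is hypothetical; the entries could be resolved with an
# inherits-style merge like the sketch earlier in the thread.
TARGETS = {
    "Target": {
        "core": None,
        "default_toolchain": "zephyr",
        "supported_toolchains": None,
        "extra_labels": [],
        "device_has": [],
        "default_lib": "std",
    },
    "quark_se_family": {                  # hypothetical SoC-family super target
        "inherits": ["Target"],
        "core": "quark_se",
        "supported_toolchains": ["ZEPHYR", "ISSM"],
        "device_has": ["WATCHDOG", "RTC", "GPIO", "I2C", "SPI", "PWM"],
    },
    "quark_se_c1000_devboard": {          # hypothetical board entry
        "inherits": ["quark_se_family"],
        "extra_labels": ["QMSI"],
    },
}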
by Inaky Perez-Gonzalez:
TCF does basically that:
$ tcf list -vvvvv qc1000-24
https://localhost:5000/ttb-v1/targets/qc1000-24
bsp_models: arc x86 x86+arc
bsps: {u'arc': {u'board': u'quark_se_c1000_ss_devboard',
                u'console': u'',
                u'kernel': [u'unified', u'nano'],
                u'kernelname': u'zephyr.bin'},
       u'x86': {u'board': u'quark_se_c1000_devboard',
                u'console': u'',
                u'kernel': [u'unified', u'micro', u'nano'],
                u'kernelname': u'zephyr.bin'}}
consoles: 1
disabled: True
fullid: local/qc1000-24
id: qc1000-24
interfaces: test_target_console_mixin test_target_images_mixin tt_debug_mixin tt_power_control_mixin
owner: None
powered: False
quark_se_stub: yes
rtb: <tcfl.ttb_client.rest_target_broker object at 0x7f58a0a5cd90>
type: ma
This output is formatted, but underneath it is a dictionary of key/values per target instance. In our case, because we have real HW instances, some instances may have hardware that others don't, so their tags may differ to reflect those differences.
But, as described before, it assumes you have a central set of key/values per target type, and you can do sub-instantiations of it.
I support going this way; no need to reinvent the wheel any more.
The actual set of tags you match on has to be formalized ad hoc so that it consistently describes what the HW supports. The most critical one (IMHO) has been the BSP tag describing each core in multi-core systems (like Quark SE), since it has to carry all the information needed to build an app solely for that core.
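A minimal sketch of that type-level/instance-level split, assuming a central set of tags per target type plus per-instance overrides (the tag names echo the TCF listing above, but the data is illustrative):
# Sketch only: key/values per target *type*, overlaid with per-instance
# overrides for hardware or state that only some instances have.
TYPE_TAGS = {
    "quark_se_c1000": {
        "bsp_models": ["arc", "x86", "x86+arc"],
        "consoles": 1,
    },
}

INSTANCE_TAGS = {
    "qc1000-24": {"disabled": True},          # this particular unit is disabled
    "qc1000-25": {"extra_sensor": "bmi160"},  # hypothetical instance-only HW
}

def tags_for(instance, target_type):
    tags = dict(TYPE_TAGS[target_type])       # start from the type-wide tags
    tags.update(INSTANCE_TAGS.get(instance, {}))
    return tags

# print(tags_for("qc1000-24", "quark_se_c1000"))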
Reported by Andrew Boie:
As a driver author, I would like to be able to specify that a driver testcase should be built/run only on those boards that actually feature that peripheral, so that all boards that have that peripheral are tested.
At the moment, few of our driver tests are even built by sanitycheck, making them susceptible to bit-rot.
Currently, there isn't a good way to do this:
One approach that might work is to define a set of CONFIG_HAVE_XXXX options for every board, where XXXX is the name of the peripheral. This could be an SoC-level define for integrated SoC peripherals.
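A sketch of how sanitycheck could act on such symbols, assuming it can see the board's defconfig (the CONFIG_HAVE_* naming scheme, the file path, and the build_and_run helper are all hypothetical):
# Sketch only: decide whether to build/run a driver test for a board by looking
# for CONFIG_HAVE_<PERIPHERAL>=y in that board's defconfig. The symbol naming
# scheme and path follow Andrew's proposal and are hypothetical.
def has_peripheral(defconfig_path, peripheral):
    wanted = "CONFIG_HAVE_%s=y" % peripheral.upper()
    with open(defconfig_path) as f:
        return any(line.strip() == wanted for line in f)

# if has_peripheral("boards/x86/quark_d2000_crb/quark_d2000_crb_defconfig", "WATCHDOG"):
#     build_and_run("tests/drivers/watchdog", board="quark_d2000_crb")  # hypothetical helper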
(Imported from Jira ZEP-1843)