Open garlick opened 1 month ago
I noticed some unintentional turduckens here while playing with the tracing tools in flux-framework/flux-core#6345.

This is a trace of `sched-fluxion-resource` when a single core job is submitted (`match_orelse_reserve` and `match_multi` might even be turduckafishes or wolfturduckens?):
```
$ sudo flux module trace -H --full sched-fluxion-resource
[Oct03 20:10] sched-fluxion-resource rx > feasibility.check [572]
{ "jobspec": { "resources": [ { "type": "slot", "count": 1, "with": [ { "type": "core", "count": 1 } ], "label": "task" } ], "tasks": [ { "command": [ "hostname" ], "slot": "task", "count": { "per_slot": 1 } } ], "attributes": { "system": { "duration": 30.0, "environment": {}, "cwd": "/nfshome/garlick", "shell": { "options": { "rlimit": { "cpu": -1, "fsize": -1, "data": -1, "stack": -1, "core": 16384, "nofile": 1024, "as": -1, "rss": -1, "nproc": 8192 } } }, "queue": "admin", "constraints": { "properties": [ "admin" ] } } }, "version": 1 }, "userid": 5588, "rolemask": 6, "urgency": 16, "flags": 0 }
[ +52.272224] sched-fluxion-resource tx < feasibility.check [0]
[ +52.295303] sched-fluxion-resource rx > sched-fluxion-resource.match_multi [763]
{ "cmd": "allocate_orelse_reserve", "jobs": "[{\"jobid\": 12431747522232320, \"jobspec\": \"{\\\"resources\\\":[{\\\"type\\\":\\\"slot\\\",\\\"count\\\":1,\\\"with\\\":[{\\\"type\\\":\\\"core\\\",\\\"count\\\":1}],\\\"label\\\":\\\"task\\\"}],\\\"tasks\\\":[{\\\"command\\\":[\\\"hostname\\\"],\\\"slot\\\":\\\"task\\\",\\\"count\\\":{\\\"per_slot\\\":1}}],\\\"attributes\\\":{\\\"system\\\":{\\\"duration\\\":30.0,\\\"cwd\\\":\\\"/nfshome/garlick\\\",\\\"shell\\\":{\\\"options\\\":{\\\"rlimit\\\":{\\\"cpu\\\":-1,\\\"fsize\\\":-1,\\\"data\\\":-1,\\\"stack\\\":-1,\\\"core\\\":16384,\\\"nofile\\\":1024,\\\"as\\\":-1,\\\"rss\\\":-1,\\\"nproc\\\":8192}}},\\\"queue\\\":\\\"admin\\\",\\\"constraints\\\":{\\\"properties\\\":[\\\"admin\\\"]}}},\\\"version\\\":1}\"}]" }
[ +52.295923] sched-fluxion-resource tx < sched-fluxion-resource.match_multi [342]
{ "jobid": 12431747522232320, "status": "ALLOCATED", "overhead": 0.00044450099999999998, "R": "{\"version\": 1, \"execution\": {\"R_lite\": [{\"rank\": \"0\", \"children\": {\"core\": \"0-3\"}}], \"nodelist\": [\"picl0\"], \"properties\": {\"8g\": \"0\", \"admin\": \"0\"}, \"starttime\": 1728011452, \"expiration\": 1728011482}}\n", "at": 1728011452 }
[ +52.296013] sched-fluxion-resource tx < sched-fluxion-resource.match_multi [0]
[ +57.281774] sched-fluxion-resource rx > feasibility.disconnect [0]
[Oct03 20:11] sched-fluxion-resource rx > sched-fluxion-resource.partial-cancel [236]
{ "jobid": 12431747522232320, "R": "{\"version\":1,\"execution\":{\"R_lite\":[{\"rank\":\"0\",\"children\":{\"core\":\"0-3\"}}],\"starttime\":0.0,\"expiration\":0.0,\"nodelist\":[\"picl0\"],\"properties\":{\"8g\":\"0\",\"admin\":\"0\"}}}" }
[ +2.518832] sched-fluxion-resource tx < sched-fluxion-resource.partial-cancel [19]
{ "full-removal": 1 }
```