cnotv opened 2 years ago
I think it would be a great idea to clean up the part of the code base that is doing this. It comes down to two functions in the store, `loadManagement` and `loadCluster`.
Not only are the API calls repetitive, but these two functions are also confusing and hard to read. I believe that refactoring them could be good for both code maintainability and performance. See points 4 and 5 of my related tech debt issue https://github.com/rancher/dashboard/issues/6882 and Richard's comment https://github.com/rancher/dashboard/issues/6882#issuecomment-1246443387.
Note the repeated use of `await` below. As @Sean-McQ has pointed out in the past, it is not ideal to use `await` in combination with `dispatch`. It would be better to make the API call without waiting for the response, and without blocking other parts of the UI while the data loads. Any part of the UI that depends on a Vuex getter will simply use the default value at first, then update automatically when the getter's value changes.
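A rough sketch of the suggested pattern, assuming a Nuxt-style component and a type-keyed `management/all` getter (both illustrative, not the exact dashboard API):

```js
import { MANAGEMENT } from '@shell/config/types';

// Illustrative component: dispatch without await, read via a reactive getter.
export default {
  fetch() {
    // Fire-and-forget: the request starts, but rendering is not blocked on it.
    this.$store.dispatch('management/findAll', { type: MANAGEMENT.CLUSTER });
  },

  computed: {
    // Empty until findAll resolves, then updates reactively via the store.
    clusters() {
      return this.$store.getters['management/all'](MANAGEMENT.CLUSTER);
    }
  }
};
```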
Both functions are in `shell/store/index.js`. Here they are:
```js
async loadManagement({
  getters, state, commit, dispatch, rootGetters
}) {
  if ( state.managementReady) {
    // Do nothing, it's already loaded
    return;
  }

  console.log('Loading management...'); // eslint-disable-line no-console

  try {
    await dispatch('rancher/findAll', { type: NORMAN.PRINCIPAL, opt: { url: 'principals' } });
  } catch (e) {
    // Maybe not Rancher
  }

  let res = await allHashSettled({
    mgmtSubscribe:  dispatch('management/subscribe'),
    mgmtSchemas:    dispatch('management/loadSchemas', true),
    rancherSchemas: dispatch('rancher/loadSchemas', true),
  });

  const promises = {
    // Clusters guaranteed always available or your money back
    clusters: dispatch('management/findAll', {
      type: MANAGEMENT.CLUSTER,
      opt:  { url: MANAGEMENT.CLUSTER }
    }),

    // Features checks on its own if they are available
    features: dispatch('features/loadServer'),
  };

  const isRancher = res.rancherSchemas.status === 'fulfilled' && !!getters['management/schemaFor'](MANAGEMENT.PROJECT);

  if ( isRancher ) {
    promises['prefs'] = dispatch('prefs/loadServer');
    promises['rancherSubscribe'] = dispatch('rancher/subscribe');
  }

  if ( getters['management/schemaFor'](COUNT) ) {
    promises['counts'] = dispatch('management/findAll', { type: COUNT });
  }

  if ( getters['management/canList'](MANAGEMENT.SETTING) ) {
    promises['settings'] = dispatch('management/findAll', { type: MANAGEMENT.SETTING });
  }

  if ( getters['management/schemaFor'](NAMESPACE) ) {
    promises['namespaces'] = dispatch('management/findAll', { type: NAMESPACE });
  }

  const fleetSchema = getters['management/schemaFor'](FLEET.WORKSPACE);

  if (fleetSchema?.links?.collection) {
    promises['workspaces'] = dispatch('management/findAll', { type: FLEET.WORKSPACE });
  }

  res = await allHash(promises);

  dispatch('i18n/init');

  let isMultiCluster = true;

  if ( res.clusters.length === 1 && res.clusters[0].metadata?.name === 'local' ) {
    isMultiCluster = false;
  }

  const pl = res.settings?.find(x => x.id === 'ui-pl')?.value;
  const brand = res.settings?.find(x => x.id === SETTING.BRAND)?.value;
  const systemNamespaces = res.settings?.find(x => x.id === SETTING.SYSTEM_NAMESPACES);

  if ( pl ) {
    setVendor(pl);
  }

  if (brand) {
    setBrand(brand);
  }

  if (systemNamespaces) {
    const namespace = (systemNamespaces.value || systemNamespaces.default)?.split(',');

    commit('setSystemNamespaces', namespace);
  }

  commit('managementChanged', {
    ready: true,
    isMultiCluster,
    isRancher,
  });

  if ( res.workspaces ) {
    commit('updateWorkspace', {
      value: getters['prefs/get'](WORKSPACE),
      all:   res.workspaces,
      getters
    });
  }

  console.log(`Done loading management; isRancher=${ isRancher }; isMultiCluster=${ isMultiCluster }`); // eslint-disable-line no-console
},
```
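For readers unfamiliar with the helpers above: `allHash` awaits an object of promises and resolves to an object of results under the same keys, while `allHashSettled` is its `Promise.allSettled` analogue. A rough sketch of the behaviour (not the dashboard's exact implementation):

```js
// Approximate behaviour of allHash: await every value of the object and
// return an object mapping the same keys to the resolved values.
async function allHash(hash) {
  const keys = Object.keys(hash);
  const values = await Promise.all(Object.values(hash));

  return keys.reduce((out, key, i) => {
    out[key] = values[i];

    return out;
  }, {});
}

// allHashSettled is the same idea built on Promise.allSettled, so a single
// rejected promise doesn't fail the whole batch.
async function allHashSettled(hash) {
  const keys = Object.keys(hash);
  const settled = await Promise.allSettled(Object.values(hash));

  return keys.reduce((out, key, i) => {
    out[key] = settled[i];

    return out;
  }, {});
}
```

The key-based shape is what lets `loadManagement` build up `promises` conditionally and still read named results back from `res`.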
```js
async loadCluster({
  state, commit, dispatch, getters
}, {
  id, product, oldProduct, oldPkg, newPkg
}) {
  const sameCluster = state.clusterId && state.clusterId === id;
  const samePackage = oldPkg?.name === newPkg?.name;
  const isMultiCluster = getters['isMultiCluster'];

  // Are we in the same cluster and package?
  if ( sameCluster && samePackage) {
    // Do nothing, we're already connected/connecting to this cluster
    return;
  }

  const oldPkgClusterStore = oldPkg?.stores.find(
    s => getters[`${ s.storeName }/isClusterStore`]
  )?.storeName;

  const newPkgClusterStore = newPkg?.stores.find(
    s => getters[`${ s.storeName }/isClusterStore`]
  )?.storeName;

  const productConfig = state['type-map']?.products?.find(p => p.name === product);
  const forgetCurrentCluster = ((state.clusterId && id) || !samePackage) && !productConfig?.inExplorer;

  // Should we leave/forget the current cluster? Only if we're going from an existing cluster to a new cluster, or the package has changed
  // (the latter catches cases like nav from explorer cluster A to epinio cluster A)
  // AND if the product is not scoped to the explorer - a case for products that only exist within the explorer (i.e. Kubewarden)
  if ( forgetCurrentCluster ) {
    // Clear the old cluster state out if switching to a new one.
    // If there is not an id then stay connected to the old one behind the scenes,
    // so that the nav and header stay the same when going to things like prefs
    commit('clusterReady', false);
    commit('clusterId', undefined);

    await dispatch('cluster/unsubscribe');
    commit('cluster/reset');

    await dispatch('management/watch', {
      type:      MANAGEMENT.PROJECT,
      namespace: state.clusterId,
      stop:      true
    });

    commit('management/forgetType', MANAGEMENT.PROJECT);
    commit('catalog/reset');

    if (oldPkgClusterStore) {
      // Mirror actions on the 'cluster' store for our specific pkg `cluster` store
      await dispatch(`${ oldPkgClusterStore }/unsubscribe`);
      await commit(`${ oldPkgClusterStore }/reset`);
    }
  }

  if ( id ) {
    // Remember the current cluster
    dispatch('prefs/set', { key: CLUSTER_PREF, value: id });
    commit('clusterId', id);

    // Use a pseudo cluster ID to pretend we have a cluster... to ensure some screens that don't care about a cluster but 'require' one to show
    if (id === BLANK_CLUSTER) {
      commit('clusterReady', true);

      return;
    }
  } else {
    // Switching to a global page with no cluster id, keep it the same.
    return;
  }

  console.log(`Loading ${ isMultiCluster ? 'ECM ' : '' }cluster...`); // eslint-disable-line no-console

  // If we've entered a new store ensure everything has loaded correctly
  if (newPkgClusterStore) {
    // Mirror actions on the 'cluster' store for our specific pkg `cluster` store
    await dispatch(`${ newPkgClusterStore }/loadCluster`, { id });

    commit('clusterReady', true);
    console.log('Done loading pkg cluster:', newPkgClusterStore); // eslint-disable-line no-console

    // Everything below here is rancher/kube cluster specific
    return;
  }

  // Execute Rancher cluster specific code

  // This is a workaround for a timing issue where the mgmt cluster schema may not be available
  // Try and wait until the schema exists before proceeding
  await dispatch('management/waitForSchema', { type: MANAGEMENT.CLUSTER });

  // See if it really exists
  try {
    const cluster = await dispatch('management/find', {
      type: MANAGEMENT.CLUSTER,
      id,
      opt:  { url: `${ MANAGEMENT.CLUSTER }s/${ escape(id) }` }
    });

    if (!cluster.isReady) {
      // Treat an unready cluster the same as a missing one. This ensures that we safely take the user to the home page instead of showing
      // an error page (useful if they've set the cluster as their home page and don't want to change their landing location)
      console.warn('Cluster is not ready, cannot load it:', cluster.nameDisplay); // eslint-disable-line no-console
      throw new Error('Unready cluster');
    }
  } catch {
    commit('clusterId', null);
    commit('cluster/applyConfig', { baseUrl: null });
    throw new ClusterNotFoundError(id);
  }

  const clusterBase = `/k8s/clusters/${ escape(id) }/v1`;

  // Update the Steve client URLs
  commit('cluster/applyConfig', { baseUrl: clusterBase });

  await Promise.all([
    dispatch('cluster/loadSchemas', true),
  ]);

  dispatch('cluster/subscribe');

  const projectArgs = {
    type: MANAGEMENT.PROJECT,
    opt:  {
      url:            `${ MANAGEMENT.PROJECT }/${ escape(id) }`,
      watchNamespace: id
    }
  };
  const fetchProjects = async() => {
    let limit = 30000;
    const sleep = 100;

    // Poll (up to 30s) until the management store is ready; setTimeout is
    // wrapped in a promise so that the await actually pauses the loop.
    while ( limit > 0 && !state.managementReady ) {
      await new Promise(resolve => setTimeout(resolve, sleep));
      limit -= sleep;
    }

    if ( getters['management/schemaFor'](MANAGEMENT.PROJECT) ) {
      return dispatch('management/findAll', projectArgs);
    }
  };
  const res = await allHash({
    projects:   fetchProjects(),
    counts:     dispatch('cluster/findAll', { type: COUNT }),
    namespaces: dispatch('cluster/findAll', { type: NAMESPACE }),
    navLinks:   !!getters['cluster/schemaFor'](UI.NAV_LINK) && dispatch('cluster/findAll', { type: UI.NAV_LINK }),
  });

  await dispatch('cleanNamespaces');

  const filters = getters['prefs/get'](NAMESPACE_FILTERS)?.[id];

  commit('updateNamespaces', {
    filters: filters || [ALL_USER],
    all:     res.namespaces,
    ...getters
  });

  commit('clusterReady', true);

  console.log('Done loading cluster.'); // eslint-disable-line no-console
}
```
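The polling loop in `fetchProjects` above is one of the spots a refactor could simplify. A small promise-based helper along these lines (hypothetical, not an existing dashboard utility) would express the intent more directly:

```js
// Hypothetical helper: resolve once test() returns true, polling every
// `interval` ms; reject after `timeout` ms so callers cannot hang forever.
function waitFor(test, { timeout = 30000, interval = 100 } = {}) {
  return new Promise((resolve, reject) => {
    const started = Date.now();

    const tick = () => {
      if (test()) {
        return resolve();
      }

      if (Date.now() - started >= timeout) {
        return reject(new Error('Timed out waiting for condition'));
      }

      setTimeout(tick, interval);
    };

    tick();
  });
}

// fetchProjects could then reduce to something like:
// await waitFor(() => state.managementReady);
```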
Looks like that's on a cluster detail page where those resources are required. I've given this another look with 2.7.5-rc3, and some of the duplicate requests (/v1/namespaces, /v1/counts) are no longer made. The duplicate gets and sets to /v1/userpreferences still need to be sorted out; from what I saw, there was no difference in the data we sent or received.
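For the remaining `/v1/userpreferences` duplicates, one common approach (a sketch only, not how the dashboard's Steve client is actually structured) is to cache the in-flight promise per request key, so concurrent callers share a single request:

```js
// Hypothetical in-flight cache: concurrent callers for the same key share
// one request instead of each issuing their own.
const inFlight = new Map();

function dedupedFetch(url, options) {
  const key = `${ options?.method || 'GET' } ${ url }`;

  if (inFlight.has(key)) {
    return inFlight.get(key);
  }

  const promise = fetch(url, options)
    .then(res => res.json())
    .finally(() => inFlight.delete(key)); // allow later refetches

  inFlight.set(key, promise);

  return promise;
}
```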
**Detailed Description**

Remove duplicated requests on cluster load.

**Context**

While loading clusters, on top of the high number of requests (31 on a first load), some are duplicated and presumably unnecessary.
This is the sorted list, which helps to identify the duplicated requests:
This is the unsorted version, which may help to trace the source of the requests:
To generate this list, simply inspect the network tab or the logs of the watched run.
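One reproducible way to build such a list is to export a HAR file from the browser's network tab and sort the request URLs; a minimal sketch (the file name is hypothetical):

```js
// Hypothetical Node script: print the request URLs from a HAR export,
// sorted so that duplicated requests end up adjacent and easy to spot.
const fs = require('fs');

const har = JSON.parse(fs.readFileSync('cluster-load.har', 'utf8'));
const urls = har.log.entries.map(e => `${ e.request.method } ${ e.request.url }`);

console.log(urls.sort().join('\n'));
```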