Closed: dongho-jung closed this issue 5 months ago.
For reference, I specified the install directory and completely reinstalled everything, from the Steampipe backend database to the plugin. After about 10 more experiments, the issue I described, and the symptoms shared in the Slack thread, consistently recurred.
Hi @dongho-jung, I apologize for the inconvenience. I have a few follow-up questions regarding the issue you mentioned. Please review them:
Hi @ParthaI 😄
If I understand correctly, in your environment an AWS EKS cluster has been configured, right?
Yes, you're right. I configured 3 EKS cluster environments.
Is this issue occurring with all CRD tables or just a specific one?
I tried experimenting with different clusters and resources about 8 times, and other CRDs in different clusters had the same issue.
Specifically, in the case of the GitHub Actions CRD RunnerSet, up to 7 fields were missing. The original report was for the karpenter.ec2nodeclass CRD in the dev cluster, but below is the actions.runnerset CRD from the main cluster.
6th attempt
create foreign table k8s_main.kubernetes_runnerset
(
name text,
uid text,
kind text,
api_version text,
namespace text,
creation_timestamp timestamp with time zone,
labels jsonb,
context_name text,
source_type text,
annotations jsonb,
docker_enabled boolean,
enterprise text,
volume_size_limit jsonb,
"group" text,
ordinals jsonb,
replicas bigint,
effective_time text,
min_ready_seconds bigint,
service_account_name text,
service_name text,
template jsonb,
volume_claim_templates jsonb,
volume_storage_medium text,
dockerd_within_runner_container boolean,
github_api_credentials_from jsonb,
image text,
spec_labels jsonb,
status_replicas bigint,
path text,
start_line bigint,
end_line bigint,
sp_connection_name text,
sp_ctx jsonb,
_ctx jsonb
)
server steampipe
options (table 'kubernetes_runnerset');
comment on foreign table k8s_main.kubernetes_runnerset is 'RunnerSet is the Schema for the runnersets API. Custom resource for runnersets.actions.summerwind.dev.';
... <skipped>
comment on column k8s_main.kubernetes_runnerset._ctx is 'Steampipe context in JSON form.';
alter foreign table k8s_main.kubernetes_runnerset
owner to root;
grant select on k8s_main.kubernetes_runnerset to steampipe_users;
8th attempt
create foreign table k8s_main.kubernetes_runnerset
(
name text,
uid text,
kind text,
api_version text,
namespace text,
creation_timestamp timestamp with time zone,
labels jsonb,
context_name text,
source_type text,
annotations jsonb,
work_volume_claim_template jsonb,
container_mode text,
docker_mtu bigint,
ephemeral boolean,
github_api_credentials_from jsonb,
service_name text,
volume_size_limit jsonb,
work_dir text,
docker_registry_mirror text,
min_ready_seconds bigint,
pod_management_policy text,
revision_history_limit bigint,
template jsonb,
image text,
replicas bigint,
service_account_name text,
update_strategy jsonb,
enterprise text,
volume_storage_medium text,
"group" text,
spec_labels jsonb,
available_replicas bigint,
desired_replicas bigint,
ready_replicas bigint,
status_replicas bigint,
path text,
start_line bigint,
end_line bigint,
sp_connection_name text,
sp_ctx jsonb,
_ctx jsonb
)
server steampipe
options (table 'kubernetes_runnerset');
comment on foreign table k8s_main.kubernetes_runnerset is 'RunnerSet is the Schema for the runnersets API. Custom resource for runnersets.actions.summerwind.dev.';
... <skipped>
comment on column k8s_main.kubernetes_runnerset._ctx is 'Steampipe context in JSON form.';
alter foreign table k8s_main.kubernetes_runnerset
owner to root;
grant select on k8s_main.kubernetes_runnerset to steampipe_users;
Hello @dongho-jung, I wanted to share an update with you. I've delved deeper into the issues and successfully replicated one on my end. Here are the details:
At times, the attributes under status are not returned, leading to missing columns. In my case I have CRD tables; I targeted a single table named kubernetes_cninode, which has a features attribute under both spec and status.
> select * from kubernetes_cninode
+------+-----+------+-------------+-----------+--------------------+--------+--------------+-------------+-------------+----------+-----------------+------+------------+----------+--------------------+--------+------+
| name | uid | kind | api_version | namespace | creation_timestamp | labels | context_name | source_type | annotations | features | status_features | path | start_line | end_line | sp_connection_name | sp_ctx | _ctx |
+------+-----+------+-------------+-----------+--------------------+--------+--------------+-------------+-------------+----------+-----------------+------+------------+----------+--------------------+--------+------+
+------+-----+------+-------------+-----------+--------------------+--------+--------------+-------------+-------------+----------+-----------------+------+------------+----------+--------------------+--------+------+
After a few attempts, the features attribute under status did not populate:
> select * from kubernetes_cninode
+------+-----+------+-------------+-----------+--------------------+--------+--------------+-------------+-------------+----------+------+------------+----------+--------------------+--------+------+
| name | uid | kind | api_version | namespace | creation_timestamp | labels | context_name | source_type | annotations | features | path | start_line | end_line | sp_connection_name | sp_ctx | _ctx |
+------+-----+------+-------------+-----------+--------------------+--------+--------------+-------------+-------------+----------+------+------------+----------+--------------------+--------+------+
+------+-----+------+-------------+-----------+--------------------+--------+--------------+-------------+-------------+----------+------+------------+----------+--------------------+--------+------+
I explored the issue further using standalone Go code to investigate the attributes under spec and status. I suspect the problem may originate from the API we are utilizing; it's unclear whether this is intended behavior or not. Unfortunately, I couldn't determine the specific circumstances under which the Status structure's attributes are not returned.
Below is the standalone code I used:
package main

import (
	"context"
	"fmt"
	"log"

	"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	// v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Assuming the path to your kubeconfig is correctly provided
	kubeconfigPath := "/path/to/.kube/config"

	// Prepare the configuration using the specified context
	configLoadingRules := clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfigPath}
	configOverrides := clientcmd.ConfigOverrides{
		CurrentContext: "<Current Context>", // Replace with your context name
		Context:        clientcmdapi.Context{},
	}

	// Build the configuration from the rules and overrides
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(&configLoadingRules, &configOverrides).ClientConfig()
	if err != nil {
		log.Fatalf("Error building kubeconfig: %s", err)
	}

	// Create a Clientset for apiextensions
	apiextensionsClient, err := clientset.NewForConfig(config)
	if err != nil {
		log.Fatalf("Failed to create apiextensions client: %s", err)
	}

	// Get CustomResourceDefinitions
	crds, err := apiextensionsClient.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("Failed to get CustomResourceDefinitions: %s", err)
	}

	fmt.Printf("There are %d CRDs in the cluster\n", len(crds.Items))
	for _, crd := range crds.Items {
		if len(crd.Spec.Versions) > 0 {
			for _, version := range crd.Spec.Versions {
				if version.Served {
					if version.Schema != nil && version.Schema.OpenAPIV3Schema != nil {
						schemaSpec := version.Schema.OpenAPIV3Schema.Properties["spec"]
						for k := range schemaSpec.Properties {
							fmt.Printf("Specs Key ====>>> %s \n\n", k)
						}
						schemaStatus := version.Schema.OpenAPIV3Schema.Properties["status"]
						if schemaStatus.Properties != nil {
							for k := range schemaStatus.Properties {
								fmt.Printf("Status Key ====>>> %s \n\n", k)
							}
						}
						// fmt.Printf("Specs ====>>> %+v \n\n", version.Schema.OpenAPIV3Schema.Properties["spec"])
						// fmt.Printf("Status ====>>> %+v \n\n", version.Schema.OpenAPIV3Schema.Properties["status"])
					}
				}
			}
		}
	}
}
I'll continue to investigate this issue further. If you have any additional context or insights, please do share them.
Thank you!
Thank you for promptly and professionally sharing this information. I'm glad that a reproducible standalone program has been developed; I expect this will accelerate troubleshooting.
Please feel free to mention me if there is anything you need from me.
Have a nice day : )
Hi @ParthaI
I tried to verify if there were any intermittent drops occurring from CustomResourceDefinitions().List() using the code you provided. To facilitate easy diffing, I sorted the CRD and internal keys alphabetically. However, after testing about five times, I did not observe any noticeable drops from this method.
package main

import (
	"context"
	"fmt"
	"log"
	"sort"

	"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Assuming the path to your kubeconfig is correctly provided
	kubeconfigPath := "<YourConfigPath>" // "/Users/dongho/.kube/config"

	// Prepare the configuration using the specified context
	configLoadingRules := clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfigPath}
	configOverrides := clientcmd.ConfigOverrides{
		CurrentContext: "<YourContext>", // "dev"
		Context:        clientcmdapi.Context{},
	}

	// Build the configuration from the rules and overrides
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(&configLoadingRules, &configOverrides).ClientConfig()
	if err != nil {
		log.Fatalf("Error building kubeconfig: %s", err)
	}

	// Create a Clientset for apiextensions
	apiextensionsClient, err := clientset.NewForConfig(config)
	if err != nil {
		log.Fatalf("Failed to create apiextensions client: %s", err)
	}

	// Get CustomResourceDefinitions
	crds, err := apiextensionsClient.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("Failed to get CustomResourceDefinitions: %s", err)
	}

	crdDetails := make(map[string]map[string][]string)
	for _, crd := range crds.Items {
		crdName := crd.Name
		crdDetails[crdName] = make(map[string][]string)
		if len(crd.Spec.Versions) > 0 {
			for _, version := range crd.Spec.Versions {
				if version.Served {
					if version.Schema != nil && version.Schema.OpenAPIV3Schema != nil {
						schemaSpec := version.Schema.OpenAPIV3Schema.Properties["spec"]
						specKeys := []string{}
						for k := range schemaSpec.Properties {
							specKeys = append(specKeys, k)
						}
						sort.Strings(specKeys)
						crdDetails[crdName]["spec"] = specKeys

						schemaStatus := version.Schema.OpenAPIV3Schema.Properties["status"]
						statusKeys := []string{}
						if schemaStatus.Properties != nil {
							for k := range schemaStatus.Properties {
								statusKeys = append(statusKeys, k)
							}
						}
						sort.Strings(statusKeys)
						crdDetails[crdName]["status"] = statusKeys
					}
				}
			}
		}
	}

	crdNames := make([]string, 0, len(crdDetails))
	for name := range crdDetails {
		crdNames = append(crdNames, name)
	}
	sort.Strings(crdNames)

	fmt.Printf("There are %d CRDs in the cluster\n", len(crds.Items))
	for _, crdName := range crdNames {
		fmt.Printf("CRD Name: %s\n", crdName)
		if specKeys, ok := crdDetails[crdName]["spec"]; ok {
			fmt.Printf(" Spec Keys:\n")
			for _, key := range specKeys {
				fmt.Printf(" - %s\n", key)
			}
		}
		if statusKeys, ok := crdDetails[crdName]["status"]; ok {
			fmt.Printf(" Status Keys:\n")
			for _, key := range statusKeys {
				fmt.Printf(" - %s\n", key)
			}
		}
		fmt.Println()
	}
}
> However, after a few attempts, the features attribute under status did not populate.
Upon further review, although the order changed slightly, it appears that the features are present. This phenomenon might be due to another part of the system.
Hi @dongho-jung, I appreciate your help and cooperation.
> I tried to verify if there were any intermittent drops occurring from CustomResourceDefinitions().List() using the code you provided. To facilitate easy diffing, I sorted the CRD and internal keys alphabetically. However, after testing about five times, I did not observe any noticeable drops from this method.
I also tried more than 20 times and consistently got the same result. In my case, the features property always appears under spec, and I never see it under status.
Did you notice whether the dropped columns (status.amis and status.subnets) are consistently returned from that standalone code?
> Upon further review, although the order changed slightly, it appears that the features are present. This phenomenon might be due to another part of the system.
When I queried the table, there were initially two columns: features and status_features. After a few attempts, the status_features column was no longer visible. I am concerned about the scenarios/conditions in which the API might fail to return this property.
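The features / status_features pair suggests a collision-avoiding naming rule: a status attribute that shares a name with a spec attribute gets a status_ prefix, while non-colliding status attributes (like amis or subnets) keep their plain name. This rule is an assumption inferred from the observed column names, not the plugin's verified code; a minimal sketch:

```go
package main

import "fmt"

// columnName sketches one plausible naming rule (an assumption, not the
// plugin's actual implementation): a status attribute whose name collides
// with a spec attribute gets a "status_" prefix, which would explain the
// features / status_features pair seen above.
func columnName(attr string, fromStatus bool, specAttrs map[string]bool) string {
	if fromStatus && specAttrs[attr] {
		return "status_" + attr
	}
	return attr
}

func main() {
	spec := map[string]bool{"features": true}
	fmt.Println(columnName("features", false, spec)) // prints "features"
	fmt.Println(columnName("features", true, spec))  // prints "status_features"
	fmt.Println(columnName("amis", true, spec))      // prints "amis" (no collision)
}
```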
Upon further investigation, the issue might be due to several factors.
I am unsure what the actual cause is in our case or if it is a bug from the API end.
Are you still encountering the same issue as before (sometimes .status.amis is missing, other times .status.subnets is missing, and sometimes both are missing or both are present)?
> Did you notice that the dropped columns (status.amis and status.subnets) are consistently returned from that standalone code?
That's strange. When I ran the standalone code about 8 times, I could no longer reproduce the issue. (I did make some minor modifications for better visibility, though.)
~/tmp/2024-05-30
❯ go run main.go 11:55:30
There are 1 CRDs containing 'nodeclass' in the cluster
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
~/tmp/2024-05-30 6s
❯ go run main.go 11:59:35
There are 1 CRDs containing 'nodeclass' in the cluster
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
~/tmp/2024-05-30
❯ go run main.go 11:59:50
There are 1 CRDs containing 'nodeclass' in the cluster
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
~/tmp/2024-05-30
❯ go run main.go 12:00:02
There are 1 CRDs containing 'nodeclass' in the cluster
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
~/tmp/2024-05-30
❯ go run main.go 12:00:09
There are 1 CRDs containing 'nodeclass' in the cluster
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
~/tmp/2024-05-30
❯ go run main.go 12:00:15
There are 1 CRDs containing 'nodeclass' in the cluster
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
~/tmp/2024-05-30
❯ go run main.go 12:00:22
There are 1 CRDs containing 'nodeclass' in the cluster
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
~/tmp/2024-05-30
❯ go run main.go 12:00:30
There are 1 CRDs containing 'nodeclass' in the cluster
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
This is the modified code:
package main

import (
	"context"
	"fmt"
	"log"
	"sort"
	"strings"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	kubeconfigPath := "/Users/dongho/.kube/config"
	configLoadingRules := clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfigPath}
	configOverrides := clientcmd.ConfigOverrides{
		CurrentContext: "dev",
		Context:        clientcmdapi.Context{},
	}
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(&configLoadingRules, &configOverrides).ClientConfig()
	if err != nil {
		log.Fatalf("Error building kubeconfig: %s", err)
	}
	crdClient, err := apiextensionsv1.NewForConfig(config)
	if err != nil {
		log.Fatalf("Failed to create apiextensions client: %s", err)
	}
	crds, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("Failed to get CustomResourceDefinitions: %s", err)
	}

	crdDetails := make(map[string]map[string][]string)
	for _, crd := range crds.Items {
		crdName := crd.Name
		if !strings.Contains(strings.ToLower(crdName), "nodeclass") {
			continue
		}
		crdDetails[crdName] = make(map[string][]string)
		if len(crd.Spec.Versions) > 0 {
			for _, version := range crd.Spec.Versions {
				if version.Served {
					if version.Schema != nil && version.Schema.OpenAPIV3Schema != nil {
						schemaSpec := version.Schema.OpenAPIV3Schema.Properties["spec"]
						specKeys := []string{}
						for k := range schemaSpec.Properties {
							specKeys = append(specKeys, k)
						}
						sort.Strings(specKeys)
						crdDetails[crdName]["spec"] = specKeys

						schemaStatus := version.Schema.OpenAPIV3Schema.Properties["status"]
						statusKeys := []string{}
						if schemaStatus.Properties != nil {
							for k := range schemaStatus.Properties {
								statusKeys = append(statusKeys, k)
							}
						}
						sort.Strings(statusKeys)
						crdDetails[crdName]["status"] = statusKeys
					}
				}
			}
		}
	}

	crdNames := make([]string, 0, len(crdDetails))
	for name := range crdDetails {
		crdNames = append(crdNames, name)
	}
	sort.Strings(crdNames)

	fmt.Printf("There are %d CRDs containing 'nodeclass' in the cluster\n", len(crdDetails))
	for _, crdName := range crdNames {
		fmt.Printf("CRD Name: %s\n", crdName)
		if specKeys, ok := crdDetails[crdName]["spec"]; ok {
			fmt.Printf(" Spec Keys: #%d\n", len(specKeys))
			for _, key := range specKeys {
				fmt.Printf(" - %s\n", key)
			}
		}
		if statusKeys, ok := crdDetails[crdName]["status"]; ok {
			fmt.Printf(" Status Keys: #%d\n", len(statusKeys))
			for _, key := range statusKeys {
				fmt.Printf(" - %s\n", key)
			}
		}
		fmt.Println()
	}
}
> After a few attempts, the status_features column was no longer visible.
Ah, I see that I made a mistake regarding that part. The second table you uploaded shows the features, but it does not show the status_features. I apologize for the confusion.
Looking not just at a single CRD but at all the keys of all CRDs, the total number of keys consistently comes out to 1218 when I run it multiple times.
package main

import (
	"context"
	"fmt"
	"log"
	"sort"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	kubeconfigPath := "/Users/dongho/.kube/config"
	configLoadingRules := clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfigPath}
	configOverrides := clientcmd.ConfigOverrides{
		CurrentContext: "dev",
		Context:        clientcmdapi.Context{},
	}
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(&configLoadingRules, &configOverrides).ClientConfig()
	if err != nil {
		log.Fatalf("Error building kubeconfig: %s", err)
	}
	crdClient, err := apiextensionsv1.NewForConfig(config)
	if err != nil {
		log.Fatalf("Failed to create apiextensions client: %s", err)
	}
	crds, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("Failed to get CustomResourceDefinitions: %s", err)
	}

	totalLeafKeys := 0
	for _, crd := range crds.Items {
		crdName := crd.Name
		fmt.Printf("CRD Name: %s\n", crdName)
		if len(crd.Spec.Versions) > 0 {
			for _, version := range crd.Spec.Versions {
				if version.Served {
					if version.Schema != nil && version.Schema.OpenAPIV3Schema != nil {
						schemaSpec := version.Schema.OpenAPIV3Schema.Properties["spec"]
						specKeys := []string{}
						for k := range schemaSpec.Properties {
							specKeys = append(specKeys, k)
						}
						sort.Strings(specKeys)
						fmt.Printf(" Spec Keys: #%d\n", len(specKeys))
						for _, key := range specKeys {
							fmt.Printf(" - %s\n", key)
						}
						totalLeafKeys += len(specKeys)

						schemaStatus := version.Schema.OpenAPIV3Schema.Properties["status"]
						statusKeys := []string{}
						if schemaStatus.Properties != nil {
							for k := range schemaStatus.Properties {
								statusKeys = append(statusKeys, k)
							}
						}
						sort.Strings(statusKeys)
						fmt.Printf(" Status Keys: #%d\n", len(statusKeys))
						for _, key := range statusKeys {
							fmt.Printf(" - %s\n", key)
						}
						totalLeafKeys += len(statusKeys)
					}
				}
			}
		}
		fmt.Println()
	}
	fmt.Printf("Total number of leaf keys: %d\n", totalLeafKeys)
}
...
Total number of leaf keys: 1218
...
Total number of leaf keys: 1218
...
Total number of leaf keys: 1218
...
Total number of leaf keys: 1218
...
I wondered if it might be a version issue, so I changed the standalone code's dependency from apiextensions-apiserver v0.30.1 to v0.25.2, which the plugin currently relies on, and experimented again, but I still got the same result.
Hmm, that's strange. It seems that we are consistently getting the missing keys (status.amis, status.subnets) every time from the underlying API we are using today.
CRD Name: ec2nodeclasses.karpenter.k8s.aws
Spec Keys: #13
- amiFamily
- amiSelectorTerms
- blockDeviceMappings
- context
- detailedMonitoring
- instanceProfile
- instanceStorePolicy
- metadataOptions
- role
- securityGroupSelectorTerms
- subnetSelectorTerms
- tags
- userData
Status Keys: #4
- amis
- instanceProfile
- securityGroups
- subnets
Could you please try querying the table ec2nodeclasses.karpenter.k8s.aws using Steampipe (select * from <Table Name>) again and see if you observe the same behavior (sometimes .status.amis is missing, other times .status.subnets is missing, and sometimes both are missing or both are present)?
Note: Before running the query, give it a few seconds to load the table (Steampipe takes a few seconds to load the configurations/dynamic tables). Follow these steps:
1. Run steampipe query.
2. Run select * from <Table Name>.
> Could you please try querying the table ec2nodeclasses.karpenter.k8s.aws using Steampipe (select * from <Table Name>) again and see if you observe the same behavior (sometimes .status.amis is missing, other times .status.subnets is missing, and sometimes both are missing or both are present)?
~/tmp/foo 12s
❯ sp query 13:55:11
Welcome to Steampipe v0.23.2
For more information, type .help
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------------------------+
| ordinal_position | column_name |
+------------------+-------------------------------+
| 1 | name |
| 2 | uid |
| 3 | kind |
| 4 | api_version |
| 5 | namespace |
| 6 | creation_timestamp |
| 7 | labels |
| 8 | context_name |
| 9 | source_type |
| 10 | annotations |
| 11 | ami_selector_terms |
| 12 | instance_profile |
| 13 | instance_store_policy |
| 14 | metadata_options |
| 15 | block_device_mappings |
| 16 | context |
| 17 | detailed_monitoring |
| 18 | ami_family |
| 19 | role |
| 20 | security_group_selector_terms |
| 21 | subnet_selector_terms |
| 22 | tags |
| 23 | user_data |
| 24 | subnets |
| 25 | amis |
| 26 | status_instance_profile |
| 27 | path |
| 28 | start_line |
| 29 | end_line |
| 30 | sp_connection_name |
| 31 | sp_ctx |
| 32 | _ctx |
+------------------+-------------------------------+
>
~/tmp/foo 14s
❯ sp query 13:55:36
Welcome to Steampipe v0.23.2
For more information, type .help
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------------------------+
| ordinal_position | column_name |
+------------------+-------------------------------+
| 1 | name |
| 2 | uid |
| 3 | kind |
| 4 | api_version |
| 5 | namespace |
| 6 | creation_timestamp |
| 7 | labels |
| 8 | context_name |
| 9 | source_type |
| 10 | annotations |
| 11 | ami_selector_terms |
| 12 | block_device_mappings |
| 13 | instance_profile |
| 14 | metadata_options |
| 15 | role |
| 16 | subnet_selector_terms |
| 17 | tags |
| 18 | ami_family |
| 19 | context |
| 20 | instance_store_policy |
| 21 | security_group_selector_terms |
| 22 | user_data |
| 23 | detailed_monitoring |
| 24 | amis |
| 25 | status_instance_profile |
| 26 | path |
| 27 | start_line |
| 28 | end_line |
| 29 | sp_connection_name |
| 30 | sp_ctx |
| 31 | _ctx |
+------------------+-------------------------------+
>
~/tmp/foo 11s
❯ sp query 13:55:48
Welcome to Steampipe v0.23.2
For more information, type .help
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------+
| ordinal_position | column_name |
+------------------+-------------+
+------------------+-------------+
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------+
| ordinal_position | column_name |
+------------------+-------------+
+------------------+-------------+
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------+
| ordinal_position | column_name |
+------------------+-------------+
+------------------+-------------+
>
~/tmp/foo 21s
❯ sp query 13:56:09
Welcome to Steampipe v0.23.2
For more information, type .help
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------------------------+
| ordinal_position | column_name |
+------------------+-------------------------------+
| 1 | name |
| 2 | uid |
| 3 | kind |
| 4 | api_version |
| 5 | namespace |
| 6 | creation_timestamp |
| 7 | labels |
| 8 | context_name |
| 9 | source_type |
| 10 | annotations |
| 11 | ami_family |
| 12 | instance_store_policy |
| 13 | security_group_selector_terms |
| 14 | instance_profile |
| 15 | metadata_options |
| 16 | role |
| 17 | ami_selector_terms |
| 18 | block_device_mappings |
| 19 | user_data |
| 20 | context |
| 21 | detailed_monitoring |
| 22 | subnet_selector_terms |
| 23 | tags |
| 24 | amis |
| 25 | status_instance_profile |
| 26 | path |
| 27 | start_line |
| 28 | end_line |
| 29 | sp_connection_name |
| 30 | sp_ctx |
| 31 | _ctx |
+------------------+-------------------------------+
>
>
~/tmp/foo 21s
❯ sp query 13:56:31
Welcome to Steampipe v0.23.2
For more information, type .help
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------------------------+
| ordinal_position | column_name |
+------------------+-------------------------------+
| 1 | name |
| 2 | uid |
| 3 | kind |
| 4 | api_version |
| 5 | namespace |
| 6 | creation_timestamp |
| 7 | labels |
| 8 | context_name |
| 9 | source_type |
| 10 | annotations |
| 11 | instance_profile |
| 12 | subnet_selector_terms |
| 13 | block_device_mappings |
| 14 | instance_store_policy |
| 15 | detailed_monitoring |
| 16 | context |
| 17 | metadata_options |
| 18 | tags |
| 19 | user_data |
| 20 | ami_family |
| 21 | role |
| 22 | security_group_selector_terms |
| 23 | ami_selector_terms |
| 24 | security_groups |
| 25 | subnets |
| 26 | amis |
| 27 | status_instance_profile |
| 28 | path |
| 29 | start_line |
| 30 | end_line |
| 31 | sp_connection_name |
| 32 | sp_ctx |
| 33 | _ctx |
+------------------+-------------------------------+
>
In Steampipe, the issue still recurs. Regarding the note "Before running the query, give a few seconds to load the table": the intermittent drop was reproduced in every case, whether I didn't wait at all, waited 5 seconds, or waited 1 minute before running the query.
For reference, if the cache is turned off, the ordinal_position interestingly gets mixed up.
~/tmp/foo 12s
❯ export STEAMPIPE_CACHE=false 14:03:32
~/tmp/foo
❯ sp query 14:03:32
Welcome to Steampipe v0.23.2
For more information, type .help
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------------------------+
| ordinal_position | column_name |
+------------------+-------------------------------+
| 31 | _ctx |
| 6 | creation_timestamp |
| 7 | labels |
| 10 | annotations |
| 11 | tags |
| 12 | ami_selector_terms |
| 13 | security_group_selector_terms |
| 15 | subnet_selector_terms |
| 16 | detailed_monitoring |
| 17 | metadata_options |
| 20 | block_device_mappings |
| 24 | amis |
| 27 | start_line |
| 28 | end_line |
| 30 | sp_ctx |
| 25 | status_instance_profile |
| 26 | path |
| 2 | uid |
| 3 | kind |
| 4 | api_version |
| 5 | namespace |
| 18 | role |
| 19 | user_data |
| 8 | context_name |
| 9 | source_type |
| 29 | sp_connection_name |
| 21 | context |
| 22 | ami_family |
| 23 | instance_profile |
| 14 | instance_store_policy |
| 1 | name |
+------------------+-------------------------------+
>
~/tmp/foo 18s
❯ sp query 14:03:54
Welcome to Steampipe v0.23.2
For more information, type .help
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------------------------+
| ordinal_position | column_name |
+------------------+-------------------------------+
| 33 | _ctx |
| 6 | creation_timestamp |
| 7 | labels |
| 10 | annotations |
| 11 | ami_selector_terms |
| 15 | metadata_options |
| 17 | security_group_selector_terms |
| 18 | tags |
| 20 | detailed_monitoring |
| 22 | subnet_selector_terms |
| 23 | block_device_mappings |
| 24 | security_groups |
| 25 | subnets |
| 26 | amis |
| 29 | start_line |
| 30 | end_line |
| 32 | sp_ctx |
| 2 | uid |
| 3 | kind |
| 4 | api_version |
| 5 | namespace |
| 28 | path |
| 19 | ami_family |
| 8 | context_name |
| 9 | source_type |
| 31 | sp_connection_name |
| 21 | instance_store_policy |
| 12 | instance_profile |
| 13 | user_data |
| 14 | context |
| 1 | name |
| 16 | role |
| 27 | status_instance_profile |
+------------------+-------------------------------+
>
~/tmp/foo 11s
❯ sp query 14:04:06
Welcome to Steampipe v0.23.2
For more information, type .help
> select ordinal_position, column_name from information_schema.columns where table_schema='k8s_dev' and table_name='kubernetes_ec2nodeclass'
+------------------+-------------------------------+
| ordinal_position | column_name |
+------------------+-------------------------------+
| 31 | _ctx |
| 6 | creation_timestamp |
| 7 | labels |
| 10 | annotations |
| 12 | detailed_monitoring |
| 14 | security_group_selector_terms |
| 16 | ami_selector_terms |
| 18 | tags |
| 20 | block_device_mappings |
| 22 | metadata_options |
| 23 | subnet_selector_terms |
| 24 | amis |
| 27 | start_line |
| 28 | end_line |
| 30 | sp_ctx |
| 29 | sp_connection_name |
| 17 | role |
| 2 | uid |
| 3 | kind |
| 4 | api_version |
| 5 | namespace |
| 1 | name |
| 19 | user_data |
| 8 | context_name |
| 9 | source_type |
| 25 | status_instance_profile |
| 11 | context |
| 21 | instance_store_policy |
| 13 | instance_profile |
| 26 | path |
| 15 | ami_family |
+------------------+-------------------------------+
>
Thank you so much, @dongho-jung, for your cooperation.
Hmm, it is quite strange. I will investigate the plugin code further. There might be a gap in the table schema building process. I will reach out if I need anything else from you.
Thanks again for all your information and help!
Yes, and I appreciate you maintaining such an awesome open-source project. I don't know which time zone you are in, but I hope you have a great rest of your day.
Hello @ParthaI. I had some time this morning to dig a bit deeper.
I confirmed that the issue was resolved by making the following modifications. Could you please review them?
https://github.com/turbot/steampipe-plugin-kubernetes/pull/229
Awesome, @dongho-jung! It's great to hear that the issue got resolved. I also had a doubt about this block of code, something was going wrong there.
I think the changes look good. I tested the following cases:
- spec properties only.
- spec and status properties (different attribute names).
- spec and status properties (same attribute name).
Thanks!
Sure, here's the sentence rewritten for better clarity and politeness:
Is that a ChatGPT thing? XD (I also have GPT review my writing before I post it, haha)
Describe the bug
Every time the Steampipe service restarts, the DDL of one of the CRD tables, kubernetes_ec2nodeclass, changes. Not only does the order of the columns change, but some fields are also missing. Across several experiments, sometimes .status.amis is missing, other times .status.subnets is missing, and sometimes both are missing or both are present.
Steampipe version (steampipe -v): v0.23.2
Plugin version (steampipe plugin list): hub.steampipe.io/plugins/turbot/kubernetes@latest 0.28.0
To reproduce
Steps to reproduce the behavior (please include relevant code and/or commands).
Expected behavior
Consistent DDL. While the order of columns is a minor issue, missing columns make it difficult to write reliable queries.
Additional context
I wasn't sure if this was a bug or something else, so I initially posted it on Slack to get more information.
You can check the actual DDL statements that changed with each attempt at the link above.