rahil-c closed this issue 2 years ago
Hi @rahil-c, thank you for the detailed description!

After removing the `managed = False` line as suggested, can you try running a migration to see if that works? For example, following https://docs.gethue.com/administrator/administration/operations/#commands, can you try the `migrate` or `makemigrations` command?
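As a rough illustration of what that option does (a plain-Python sketch, not the actual Hue model; the class name is only a stand-in):

```python
# Plain-Python sketch (not the real Django model) of the Meta option
# discussed above. In Django, Meta.managed = False tells the migration
# framework to skip the table entirely, so makemigrations/migrate will
# never emit a CREATE TABLE for it.
class HiveQuery:                 # stand-in for jobbrowser.models.HiveQuery
    class Meta:
        managed = False          # removing this line (the default is True)
                                 # lets makemigrations generate the table

print(HiveQuery.Meta.managed)    # → False
```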
Hi @Harshg999, thanks for the quick response.

I tried both of the following commands:
```
[hadoop@ip-1XX-XX_XX ~]$ /usr/lib/hue/build/env/bin/hue migrate
WARNINGS:
?: (mysql.W002) MySQL Strict Mode is not set for database connection 'default'
HINT: MySQL's Strict Mode fixes many data integrity problems in MySQL, such as data truncation upon insertion, by escalating warnings into errors. It is strongly recommended you activate it. See: https://docs.djangoproject.com/en/1.11/ref/databases/#mysql-sql-mode
jobbrowser.DagDetails.dag_info: (fields.W342) Setting unique=True on a ForeignKey has the same effect as using a OneToOneField.
HINT: ForeignKey(unique=True) is usually better served by a OneToOneField.
jobbrowser.QueryDetails.hive_query: (fields.W342) Setting unique=True on a ForeignKey has the same effect as using a OneToOneField.
HINT: ForeignKey(unique=True) is usually better served by a OneToOneField.
Operations to perform:
  Apply all migrations: admin, auth, axes, beeswax, contenttypes, desktop, jobsub, oozie, pig, sessions, sites, useradmin
Running migrations:
  No migrations to apply
```
```
[hadoop@ip-1XX-XX_XX ~]$ /usr/lib/hue/build/env/bin/hue makemigrations
System check identified some issues:
WARNINGS:
jobbrowser.DagDetails.dag_info: (fields.W342) Setting unique=True on a ForeignKey has the same effect as using a OneToOneField.
HINT: ForeignKey(unique=True) is usually better served by a OneToOneField.
jobbrowser.QueryDetails.hive_query: (fields.W342) Setting unique=True on a ForeignKey has the same effect as using a OneToOneField.
HINT: ForeignKey(unique=True) is usually better served by a OneToOneField.
No changes detected
```
It seems that some other tables were added, but not the ones introduced in the original commit, such as `hive_query`. The `dumpdata` command still fails with the same error regardless of these commands being run.
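One quick way to confirm which tables actually exist is to list them directly. Below is a sketch using an in-memory SQLite database as a stand-in for Hue's MySQL backend; the checked table names are assumptions derived from the models in the commit, not confirmed names:

```python
import sqlite3

# Sketch: list the tables in the database and check whether the ones the
# commit's models would need are present. SQLite stands in for MySQL here;
# against the real Hue backend you would run SHOW TABLES in MySQL instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE beeswax_session (id INTEGER)")  # unrelated example table

existing = {row[0] for row in
            conn.execute("SELECT name FROM sqlite_master WHERE type='table'")}

# Hypothetical table names based on the HiveQuery/QueryDetails/DagDetails models
for table in ("hive_query", "query_details", "dag_details"):
    print(table, "present" if table in existing else "missing")
```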
Hi @rahil-c, it looks like this code belongs to a feature that was not fully implemented. You can try reverting the mentioned commit https://github.com/cloudera/hue/commit/f999ac696c88c6c19060f37afaa7c019e28c8ba5 to see if the database dump works again.
Hope it helps!
Hi @Harshg999, I have tried commenting out the classes introduced by the above commit f999ac6 while the Hue service is running. With them commented out, the database dump command completes successfully.

I'm just curious: would commenting out or reverting this commit have any other impact elsewhere in Hue?
This should not have any impact elsewhere in Hue, since the feature was never fully completed. It was intended to be used with a separate database configured under `[[query_database]]` in the config, and it is gated by the `enable_hive_query_browser` feature flag under `[jobbrowser]`, which is `false` by default. So, as long as nothing interacts with the above, commenting out/reverting is good to go.
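For reference, the relevant `hue.ini` fragment would look roughly like this. This is a sketch: only the `enable_hive_query_browser` flag and the `[[query_database]]` section name come from the thread; the placement and surrounding comments are assumptions, so check the docs for your Hue version.

```ini
[jobbrowser]
  # Feature flag mentioned above; false by default, so the incomplete
  # HiveQuery/QueryDetails/DagDetails models are never exercised
  # unless this is explicitly turned on.
  enable_hive_query_browser=false

# The feature was intended to read from a separate database configured
# in a [[query_database]] sub-section (parent section assumed here):
# [[query_database]]
#   ...connection settings...
```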
Is the issue already present in https://github.com/cloudera/hue/issues or discussed in the forum https://discourse.gethue.com?
The issue is discussed in these forums
Describe the bug:
When running Hue version 4.8.0 on AWS EMR 5.32.0, running the following command

```
/usr/lib/hue/build/env/bin/hue dumpdata > ./hue-mysql.json
```

returns the following exception.
Cause of issue:
The `hive_query` table, as well as all the other tables/models from the original commit, are not added within the local MySQL database's alias for `hue`.

Investigation:
The `models.py` script can be manually modified on the cluster at this path: `/usr/lib/hue/apps/jobbrowser/src/jobbrowser/models.py`. However, it seems that following the above suggestions, such as removing `managed = False`, still does not create the `hive_query` or other tables.

Hue version or source? (e.g. open source 4.5, CDH 5.16, CDP 1.0...). System info (e.g. OS, Browser...):
open source Hue 4.8.0+
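If editing `models.py` on the cluster is undesirable, one possible workaround (an untested sketch, relying on Django's standard `--exclude` option for `dumpdata` rather than anything Hue-specific) would be to skip the incomplete jobbrowser models entirely:

```shell
# Untested sketch: exclude the jobbrowser app (which contains the
# incomplete HiveQuery/QueryDetails/DagDetails models) from the dump,
# using Django's standard dumpdata --exclude option.
/usr/lib/hue/build/env/bin/hue dumpdata --exclude jobbrowser > ./hue-mysql.json
```

Note that this also omits any legitimate jobbrowser data from the dump, so it is only suitable if that app's tables are not needed in the export.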
Special Note:
Can the Hue community please prioritize this fix, since it seems to be affecting several customers on AWS EMR?