Closed: inhoYoo closed this issue 1 month ago
It looks like your database was created with character set US7ASCII which will be incapable of storing Korean characters. The best thing to do would be to convert the database to character set AL32UTF8 which is a universal character set (use the command
alter database character set AL32UTF8
when running in restricted mode). At that point you will be able to store Korean characters successfully. If you are unable to change the character set you will have to use NCHAR and NVARCHAR2 columns, but that is not recommended.
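For context, the limitation can be seen from Python alone, without touching the database: Korean characters simply have no representation in 7-bit ASCII (US7ASCII), while both UTF-8 (AL32UTF8) and Windows code page 949 (KO16MSWIN949) can encode them. A minimal illustration:

```python
text = "한국어"  # "Korean language"

# US7ASCII is a 7-bit character set: Korean characters cannot be encoded at all.
try:
    text.encode("ascii")
    fits_in_ascii = True
except UnicodeEncodeError:
    fits_in_ascii = False

# AL32UTF8 corresponds to UTF-8, and KO16MSWIN949 to Windows code page 949;
# both round-trip Korean text without loss.
utf8_bytes = text.encode("utf-8")
cp949_bytes = text.encode("cp949")

print(fits_in_ascii)                         # False
print(utf8_bytes.decode("utf-8") == text)    # True
print(cp949_bytes.decode("cp949") == text)   # True
```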
Is there a way to decode the data so that it displays correctly when using oracledb in a Python environment?
Check the documentation Fetching Raw Data and Querying Corrupt Data.
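Those documentation sections describe fetching column values as raw bytes so you can see which encoding the stored data is actually in. As a rough, database-free sketch of that diagnosis (the specific charsets here are assumptions for illustration): once you have the raw bytes, trying candidate decodings tells you what is really stored, and a wrong decode is often reversible.

```python
# Simulate a common failure: the column physically holds CP949 (KO16MSWIN949)
# bytes, but the client decoded them with the wrong character set.
original = "한국어"
stored_bytes = original.encode("cp949")   # what is physically in the column

garbled = stored_bytes.decode("latin-1")  # a wrong decode produces mojibake

# Because latin-1 maps every byte value, the damage is reversible:
# re-encode to recover the original bytes, then decode with the right charset.
repaired = garbled.encode("latin-1").decode("cp949")
print(repaired == original)  # True
```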
Hello.
Thank you for your time.
Our DBMS runs Oracle 19c, and the attached picture shows the parameters related to NLS_LANGUAGE.
Those settings cannot be changed on our side. In an editor called SQL Tools, Korean is displayed correctly.
The problem is that when I load data using cx_Oracle, the Korean text always comes back garbled.
I installed the latest Instant Client and passed its path to init_oracle_client, and I tried both utf-8 and euc-kr for the encoding and nencoding options in SQLAlchemy's connect_args, but the Korean text is still garbled.
I also tried setting os.environ['NLS_LANG'] = 'KOREAN_KOREA.KO16MSWIN949' in the Python environment, but it made no difference.
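One thing worth checking on the NLS_LANG attempt (a general note, not a confirmed diagnosis of this report): the Oracle Client libraries read environment variables when they are first initialized, so setting NLS_LANG from Python only has an effect if it happens before cx_Oracle loads the client. A minimal sketch of the required ordering, using the NLS_LANG value from the report (the library path is a placeholder):

```python
import os

# Must be set BEFORE cx_Oracle initializes the Oracle Client libraries;
# assigning it after connect() or init_oracle_client() has no effect.
os.environ["NLS_LANG"] = "KOREAN_KOREA.KO16MSWIN949"

# Only after the environment is prepared:
#   import cx_Oracle
#   cx_Oracle.init_oracle_client(lib_dir="/path/to/instantclient")  # example path
#   conn = cx_Oracle.connect(...)
```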
How can we solve this situation? Any help would be much appreciated.
Thank you, and best wishes.