Closed: sonnguyen-dba closed this issue 3 years ago
================================================================================
VM Arguments:
jvm_args: -Xms20G -Xmx200G -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/data/kafka/bin/../logs -Dlog4j.configuration=file:/data/kafka/bin/../etc/kafka/connect-log4j.properties
java_command: org.apache.kafka.connect.cli.ConnectStandalone /data/kafka/config/connect-standalone.properties /data/kafka/config/erp/oracdc-outputvoucher.properties
java_class_path (initial): /data/oracdc/target/lib/HikariCP-3.4.1.jar:/data/oracdc/target/lib/affinity-3.2.2.jar:/data/oracdc/target/lib/annotations-12.0.jar:/data/oracdc/target/lib/chronicle-bytes-2.17.49.jar:/data/oracdc/target/lib/chronicle-core-2.17.35.jar:/data/oracdc/target/lib/chronicle-queue-5.17.43.jar:/data/oracdc/target/lib/chronicle-threads-2.17.27.jar:/data/oracdc/target/lib/chronicle-wire-2.17.71.jar:/data/oracdc/target/lib/commons-cli-1.4.jar:/data/oracdc/target/lib/commons-lang3-3.8.1.jar:/data/oracdc/target/lib/commons-math3-3.6.1.jar:/data/oracdc/target/lib/compiler-2.3.4.jar:/data/oracdc/target/lib/hamcrest-core-1.3.jar:/data/oracdc/target/lib/jackson-annotations-2.10.0.jar:/data/oracdc/target/lib/jackson-core-2.10.0.jar:/data/oracdc/target/lib/jackson-databind-2.10.0.jar:/data/oracdc/target/lib/jna-4.2.1.jar:/data/oracdc/target/lib/jna-platform-4.2.1.jar:/data/oracdc/target/lib/ojdbc8-19.7.0.0.jar:/data/oracdc/target/lib/ons-19.7.0.0.jar:/data/oracdc/target/lib/oraclepki-19.7.0.0.jar:/data/oracdc/target/lib/osdt_cert-19.7.0.0.jar:/data/oracdc/target/lib/osdt_core-19.7.0.0.jar:/data/oracdc/target/lib/simplefan-19.7.0.0.jar:/data/oracdc/target/lib/ucp-19.7.0.0.jar:/data/oracdc/target/lib/postgresql-42.2.14.jar:/data/oracdc/target/oracdc-kafka-0.9.7.1.jar:/data/kafka/share/java/confluent-security/connect/kotlin-stdlib-1.3.71.jar:/data/kafka/share/java/confluent-security/connect/zookeeper-jute-3.5.8.jar:/data/kafka/share/java/confluent-security/connect/netty-resolver-4.1.48.Final.jar:/data/kafka/share/java/confluent-security/connect/metrics-core-2.2.0.jar:/data/kafka/share/java/confluent-security/connect/org.everit.json.schema-1.12.1.jar:/data/kafka/share/java/confluent-security/connect/jackson-module-parameter-names-2.10.2.jar:/data/kafka/share/java/confluent-security/connect/common-utils-5.5.1.jar:/data/kafka/share/java/confluent-security/connect/jackson-module-jaxb-annotations-2.10.2.jar:/data/kafka/share/java/confluent-security/connect/jetty-ht
Launcher Type: SUN_STANDARD
Environment Variables:
JAVA_HOME=/opt/jdk1.8.0_261
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/bin:/opt/jdk1.8.0_261/bin:/opt/apache-maven-3.6.3/bin
SHELL=/bin/bash
/proc/meminfo:
MemTotal: 263973948 kB
MemFree: 91352568 kB
MemAvailable: 231530568 kB
Buffers: 536268 kB
Cached: 140247164 kB
SwapCached: 8 kB
Active: 30297872 kB
Inactive: 139813144 kB
Active(anon): 29292096 kB
Inactive(anon): 44572 kB
Active(file): 1005776 kB
Inactive(file): 139768572 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1048572 kB
SwapFree: 1047284 kB
Dirty: 61324 kB
Writeback: 0 kB
AnonPages: 29327964 kB
Mapped: 735208 kB
Shmem: 8776 kB
Slab: 784820 kB
SReclaimable: 709676 kB
SUnreclaim: 75144 kB
KernelStack: 5120 kB
PageTables: 218336 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 133035544 kB
Committed_AS: 161785020 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 436804 kB
VmallocChunk: 34359289852 kB
Percpu: 2816 kB
HardwareCorrupted: 0 kB
AnonHugePages: 29069312 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 77672 kB
DirectMap2M: 268357632 kB
container (cgroup) information:
container_type: cgroupv1
cpu_cpuset_cpus: 0-15
cpu_memory_nodes: 0
active_processor_count: 16
cpu_quota: -1
cpu_period: 100000
cpu_shares: -1
memory_limit_in_bytes: -1
memory_and_swap_limit_in_bytes: -1
memory_soft_limit_in_bytes: -1
memory_usage_in_bytes: 174194388992
memory_max_usage_in_bytes: 0
CPU:total 16 (initial active 16) (16 cores per cpu, 1 threads per core) family 6 model 6 stepping 3, cmov, cx8, fxsr, mmx, sse, sse2, sse3, tsc
================================================================================
Hi Son,
Could you please update the issue with the output of:
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio
Regards, Aleksei
Hi Aleksej,
cat /proc/sys/vm/overcommit_memory
0
cat /proc/sys/vm/overcommit_ratio
50
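These settings tie directly to the CommitLimit in the /proc/meminfo dump above: with overcommit_ratio=50, the limit is swap plus half of RAM. A quick check (pure arithmetic, all values copied from the crash report; the small difference from the reported value comes from kernel page rounding):

```python
# Values from the /proc/meminfo dump in the crash report (kB)
mem_total = 263_973_948        # MemTotal
swap_total = 1_048_572         # SwapTotal
overcommit_ratio = 50          # /proc/sys/vm/overcommit_ratio

# CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100
commit_limit = swap_total + mem_total * overcommit_ratio // 100
print(commit_limit)            # close to the reported 133,035,544 kB
```

Note that Committed_AS in the same dump (161,785,020 kB) already exceeds this limit, although heuristic mode (overcommit_memory=0) does not enforce it strictly.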
Regards, Son
Thanks!
Please provide the value of the KAFKA_OPTS environment variable for the connector.
Regards, Aleksei
Hi Aleksej, export KAFKA_HEAP_OPTS="-Xms30G -Xmx200G -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=70 -XX:G1MaxNewSizePercent=80 -XX:G1HeapRegionSize=80M -XX:G1ReservePercent=30 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=8 -XX:+AlwaysPreTouch -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=20 -XX:G1MixedGCLiveThresholdPercent=80 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:MaxTenuringThreshold=1 -XX:InitiatingHeapOccupancyPercent=15 -XX:+PerfDisableSharedMem -XX:ParallelGCThreads=10 -XX:CICompilerCount=10"
Regards, Son
Please set
KAFKA_HEAP_OPTS="-Xms32G -Xmx32G -XX:MaxDirectMemorySize=128G"
export KAFKA_HEAP_OPTS
remove the transaction files and oracdc.state, then retest the issue
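A sketch of those steps, assuming the oracdc queue and state files live under /data/oracdc as in the logs above (exact paths depend on the connector configuration):

```shell
# Small heap, large direct memory: Chronicle Queue keeps its data in
# memory-mapped files outside the Java heap.
KAFKA_HEAP_OPTS="-Xms32G -Xmx32G -XX:MaxDirectMemorySize=128G"
export KAFKA_HEAP_OPTS

# Remove leftover Chronicle Queue transaction files and the connector state.
# Both paths are assumptions based on this installation's layout.
rm -rf /data/oracdc/trans/*
rm -f /data/oracdc/oracdc.state

# Restart the standalone worker with the same configs as before
/data/kafka/bin/connect-standalone /data/kafka/config/connect-standalone.properties \
    /data/kafka/config/erp/oracdc-outputvoucher.properties
```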
Regards, Aleksei
Thanks Aleksej, I'll try it... Regards, Son
Hi Aleksej,
[2020-11-10 14:34:05,898] WARN Took 844228 to add mapping for /data/oracdc/trans/0D570012001F27A5.5260993472897430974/metadata.cq4t (net.openhft.chronicle.bytes.MappedFile:42)
Exception in thread "OraCdcLogMinerWorkerThread-18221298454585" java.nio.BufferOverflowException
    at net.openhft.chronicle.bytes.MappedBytes.acquireNextByteStore0(MappedBytes.java:379)
    at net.openhft.chronicle.bytes.MappedBytes.writeCheckOffset(MappedBytes.java:342)
    at net.openhft.chronicle.bytes.AbstractBytes.compareAndSwapInt(AbstractBytes.java:191)
    at net.openhft.chronicle.wire.AbstractWire.writeFirstHeader(AbstractWire.java:582)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue$StoreSupplier.acquire(SingleChronicleQueue.java:837)
    at net.openhft.chronicle.queue.impl.WireStorePool.acquire(WireStorePool.java:97)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.setCycle2(SingleChronicleQueueExcerpts.java:291)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.setWireIfNull(SingleChronicleQueueExcerpts.java:417)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:386)
    at net.openhft.chronicle.wire.MarshallableOut.writeDocument(MarshallableOut.java:94)
    at eu.solutions.a2.cdc.oracle.OraCdcTransaction.addStatement(OraCdcTransaction.java:223)
    at eu.solutions.a2.cdc.oracle.OraCdcLogMinerWorkerThread.run(OraCdcLogMinerWorkerThread.java:514)
Caused by: java.io.IOException: Map failed
    at net.openhft.chronicle.core.OS.lambda$null$1(OS.java:365)
    at net.openhft.chronicle.core.OS.invokeFileChannelMap0(OS.java:344)
    at net.openhft.chronicle.core.OS.lambda$map0$2(OS.java:364)
    at net.openhft.chronicle.core.OS.invokeFileChannelMap0(OS.java:344)
    at net.openhft.chronicle.core.OS.map0(OS.java:355)
    at net.openhft.chronicle.core.OS.map(OS.java:333)
    at net.openhft.chronicle.bytes.MappedFile.acquireByteStore(MappedFile.java:348)
    at net.openhft.chronicle.bytes.MappedFile.acquireByteStore(MappedFile.java:296)
    at net.openhft.chronicle.bytes.MappedBytes.acquireNextByteStore0(MappedBytes.java:374)
    ... 11 more
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at net.openhft.chronicle.core.OS.invokeFileChannelMap0(OS.java:339)
    ... 18 more
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
[2020-11-10 14:34:09,462] INFO WorkerSourceTask{id=oracdc-oracle-ov-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:426)
[2020-11-10 14:34:09,462] INFO WorkerSourceTask{id=oracdc-oracle-ov-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:443)
[2020-11-10 14:34:19,462] INFO WorkerSourceTask{id=oracdc-oracle-ov-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:426)
Regards, Son
Hi Son, Could you please run the following queries:
select count(*) from ERP.PM_OUTPUTVOUCHER;
select NUM_ROWS, BLOCKS, AVG_ROW_LEN
from DBA_TABLES
where OWNER='ERP' and TABLE_NAME='PM_OUTPUTVOUCHER';
and update this issue with query output
Regards, Aleksei
Hi Aleksej,
540785564
select NUM_ROWS, BLOCKS, AVG_ROW_LEN from DBA_TABLES where OWNER='ERP' and TABLE_NAME='PM_OUTPUTVOUCHER';
NUM_ROWS BLOCKS AVG_ROW_LEN
448604746 28523782 331
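From these statistics the table size can be estimated two ways, from net row data and from the allocated segment (the 8 KiB block size is an assumption, since the tablespace block size is not shown in this thread):

```python
# Figures from the DBA_TABLES output above
num_rows = 448_604_746     # NUM_ROWS
avg_row_len = 331          # AVG_ROW_LEN, bytes per row
blocks = 28_523_782        # BLOCKS
block_size = 8192          # assumption: common 8 KiB Oracle block size

data_gib = num_rows * avg_row_len / 2**30    # net row data: ~138 GiB
segment_gib = blocks * block_size / 2**30    # allocated segment: ~218 GiB
print(round(data_gib, 1), round(segment_gib, 1))
```

Either way the table is well past the "more than 150 GiB" mentioned below, and the live COUNT(*) (540,785,564) is already higher than the stale NUM_ROWS statistic.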
Regards, Son
Hi Son,
This is definitely not a software or software-configuration issue.
Sending a table of more than 150 GiB through Kafka is not a good solution. As we discussed before in the e-mail loop, the right way is: if the target is Oracle, run expdp with FLASHBACK_SCN, then impdp, and start oracdc from that SCN using the a2.first.change parameter with a2.initial.load=IGNORE. If you need assistance with different targets (HDFS/PostgreSQL/S3/Teradata/etc.), we're ready for a consulting engagement to perform the data migration.
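The recommended Oracle-to-Oracle path can be sketched as follows; the directory object, dump-file name, credentials, and the SCN value are all placeholders, not values from this thread:

```shell
# 1) Consistent export of the table as of a known SCN
#    (dp_dir, the dump file, and credentials are hypothetical).
expdp system/***@SRC TABLES=ERP.PM_OUTPUTVOUCHER \
      DIRECTORY=dp_dir DUMPFILE=pm_outputvoucher.dmp FLASHBACK_SCN=1234567890

# 2) Import on the target database
impdp system/***@DST DIRECTORY=dp_dir DUMPFILE=pm_outputvoucher.dmp

# 3) Start oracdc from the same SCN, skipping its own initial load
#    (lines for the connector properties file):
#    a2.first.change=1234567890
#    a2.initial.load=IGNORE
```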
If you prefer not to follow data-migration best practices and believe in a miracle:
Regards, Aleksei
Thanks Aleksej, maybe I'll try with JDK 11. I need to test the initial load to Kafka.
Regards, Son
Hi Son,
The oracdc initial load is not designed to handle tables of the size of your ERP.PM_OUTPUTVOUCHER, with half a billion rows and more than 150 GiB of data.
Regards, Aleksei
Thanks Aleksej, Yes, I see!
Regards, Son
Hi Aleksej, could this be due to this Chronicle-Queue bug? https://github.com/OpenHFT/Chronicle-Queue/issues/751
Regards, Son
Hi Son,
This is a different issue. You need to try JDK 11 without setting the KAFKA_HEAP_OPTS/KAFKA_OPTS environment variables. It is possible to enhance the initial load process in oracdc, but this is outside our priorities; programming help or financing would be required.
Regards, Aleksei
Hi Aleksej, I tried "JDK11 without setting KAFKA_HEAP_OPTS/KAFKA_OPTS environment variables" but got the same result. Now I'm trying with a decreased batch.size=200000.
Regards, Son
Hi Aleksej,
[2020-11-10 20:54:49,189] INFO Kafka Connect standalone worker initializing ... (org.apache.kafka.connect.cli.ConnectStandalone:69)
[2020-11-10 20:54:49,201] INFO WorkerInfo values:
jvm.args = -Xmx256M, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/data/kafka/bin/../logs, -Dlog4j.configuration=file:/data/kafka/bin/../etc/kafka/connect-log4j.properties
jvm.spec = Oracle Corporation, Java HotSpot(TM) 64-Bit Server VM, 11.0.8, 11.0.8+10-LTS
[2020-11-10 21:10:08,609] INFO WorkerSourceTask{id=oracdc-oracle-ov-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:443)
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "SourceTaskOffsetCommitter-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "oracle.ucp.actors.InterruptableActor-control"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-4-thread-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "OraCdcLogMinerWorkerThread-4181927918351"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main/queue-thread-local-cleaner-daemon"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main/disk-space-checker"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-1-thread-1"
Regards, Son
Mea culpa!
kafka-run-class.sh: KAFKA_HEAP_OPTS="-Xmx256M"
Please set
KAFKA_HEAP_OPTS="-Xms32G -Xmx32G -XX:MaxDirectMemorySize=256G"
export KAFKA_HEAP_OPTS
and try again
Regards, Aleksei
Ok, Thanks Aleksej. I'm trying again... Regards, Son
Hi Son,
Could you please provide output of following commands:
cat /proc/sys/vm/max_map_count
ulimit -a
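A related check (not in the original request, but the commands are standard Linux): compare how many memory mappings the process already holds against the system-wide limit, since mmap() fails with ENOMEM once a process reaches vm.max_map_count:

```shell
# Count live memory mappings for a PID versus vm.max_map_count.
# Pass the Connect JVM's PID as $1; defaults to the current shell
# for demonstration purposes.
pid=${1:-$$}
maps=$(wc -l < "/proc/${pid}/maps")
limit=$(cat /proc/sys/vm/max_map_count)
echo "pid=${pid} mappings=${maps} limit=${limit}"
```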
Thanks.
Hi Aleksej,
[root@oracdc46 ~]# cat /proc/sys/vm/max_map_count
65530
[root@oracdc46 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1031062
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 32768
cpu time               (seconds, -t) unlimited
max user processes              (-u) 6815744
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Regards, Son
Hi Aleksej,
jvm.args = -Xms32G, -Xmx32G, -XX:MaxDirectMemorySize=200G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/data/kafka/bin/../logs, -Dlog4j.configuration=file:/data/kafka/bin/../etc/kafka/connect-log4j.properties
jvm.spec = Oracle Corporation, Java HotSpot(TM) 64-Bit Server VM, 11.0.8, 11.0.8+10-LTS
[2020-11-11 09:37:59,684] INFO WorkerSourceTask{id=oracdc-oracle-ov-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:443)
[1554.859s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 4k, detached.
[1554.859s][warning][gc ] Failed to create refinement thread 4, no more OS threads
Exception in thread "OraCdcLogMinerWorkerThread-48423515636963" java.nio.BufferOverflowException
    at net.openhft.chronicle.bytes.MappedBytes.acquireNextByteStore0(MappedBytes.java:379)
    at net.openhft.chronicle.bytes.MappedBytes.writeCheckOffset(MappedBytes.java:342)
    at net.openhft.chronicle.bytes.AbstractBytes.compareAndSwapInt(AbstractBytes.java:191)
    at net.openhft.chronicle.wire.AbstractWire.writeFirstHeader(AbstractWire.java:582)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue$StoreSupplier.acquire(SingleChronicleQueue.java:837)
    at net.openhft.chronicle.queue.impl.WireStorePool.acquire(WireStorePool.java:97)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.setCycle2(SingleChronicleQueueExcerpts.java:291)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.setWireIfNull(SingleChronicleQueueExcerpts.java:417)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:386)
    at net.openhft.chronicle.wire.MarshallableOut.writeDocument(MarshallableOut.java:94)
    at eu.solutions.a2.cdc.oracle.OraCdcTransaction.addStatement(OraCdcTransaction.java:223)
    at eu.solutions.a2.cdc.oracle.OraCdcLogMinerWorkerThread.run(OraCdcLogMinerWorkerThread.java:517)
Caused by: java.io.IOException: Map failed
    at net.openhft.chronicle.core.OS.lambda$null$1(OS.java:365)
    at net.openhft.chronicle.core.OS.invokeFileChannelMap0(OS.java:344)
    at net.openhft.chronicle.core.OS.lambda$map0$2(OS.java:364)
    at net.openhft.chronicle.core.OS.invokeFileChannelMap0(OS.java:344)
    at net.openhft.chronicle.core.OS.map0(OS.java:355)
    at net.openhft.chronicle.core.OS.map(OS.java:333)
    at net.openhft.chronicle.bytes.MappedFile.acquireByteStore(MappedFile.java:348)
    at net.openhft.chronicle.bytes.MappedFile.acquireByteStore(MappedFile.java:296)
    at net.openhft.chronicle.bytes.MappedBytes.acquireNextByteStore0(MappedBytes.java:374)
    ... 11 more
Caused by: java.lang.OutOfMemoryError: Map failed
    at java.base/sun.nio.ch.FileChannelImpl.map0(Native Method)
    at net.openhft.chronicle.core.OS.invokeFileChannelMap0(OS.java:339)
    ... 18 more
[1555.639s][warning][os,thread] Attempt to deallocate stack guard pages failed (0x00007f41fd8b9000-0x00007f41fd8bd000).
[2020-11-11 09:38:09,684] INFO WorkerSourceTask{id=oracdc-oracle-ov-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:426)
[2020-11-11 10:03:49,790] INFO WorkerSourceTask{id=oracdc-oracle-ov-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:443)
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f4b40ae0000, 65536, 1) failed; error='Not enough space' (errno=12)
#
#
Regards, Son
Hi Son,
Please upload /data/kafka/hs_err_pid13075.log
Regards, Aleksei
Hi Aleksej,
replay_pid13075.log
hs_err_pid13075.log
Regards, Son
Hi Son,
1) Add the following lines to /etc/sysctl.conf
### 1 per 128KiB of RAM
vm.max_map_count=2097152
2) Reload the config as root:
sysctl -p
3) Check the new value:
cat /proc/sys/vm/max_map_count
4) Set KAFKA_HEAP_OPTS to
KAFKA_HEAP_OPTS="-Xms16G -Xmx16G -XX:MaxDirectMemorySize=224G"
export KAFKA_HEAP_OPTS
5) Retest issue
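The suggested vm.max_map_count matches the "1 per 128 KiB of RAM" comment for this ~256 GiB host; a quick check (the rounding up to a power of two is an assumption about how the value was chosen):

```python
# MemTotal from the /proc/meminfo dump earlier in this thread (kB)
mem_total_kib = 263_973_948
mappings_needed = mem_total_kib // 128   # one mapping per 128 KiB of RAM
suggested = 2_097_152                    # value from /etc/sysctl.conf above

# 256 GiB / 128 KiB is exactly 2**21, so the suggested value covers the
# host with headroom; the default of 65530 is ~32x too small for this rule.
print(mappings_needed, suggested, suggested == 2**21)
```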
Regards, Aleksei
Thanks Aleksej, I'm testing with KAFKA_HEAP_OPTS in kafka-run-class.sh set to "-Xms32G -Xmx32G -XX:MaxDirectMemorySize=160G -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=20 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M -XX:G1ReservePercent=20 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=8 -XX:InitiatingHeapOccupancyPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:MaxTenuringThreshold=1 -XX:+PerfDisableSharedMem -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+HeapDumpOnOutOfMemoryError"
I'm waiting for the result. Regards, Son
Hi Son,
You need to decrease the heap as I wrote! Excuse me, but why am I writing if you ignore it? If you ignore the suggestion, please close this issue.
Regards, Aleksei
Thanks Aleksej, now I'm trying your parameters; my own options were the problem. Regards, Son
Hi Son,
- Add the following lines to /etc/sysctl.conf
### 1 per 128KiB of RAM
vm.max_map_count=2097152
- Reload the config as root:
sysctl -p
- Check the new value:
cat /proc/sys/vm/max_map_count
- Set KAFKA_HEAP_OPTS to
KAFKA_HEAP_OPTS="-Xms16G -Xmx16G -XX:MaxDirectMemorySize=224G"
export KAFKA_HEAP_OPTS
- Retest issue
Regards, Aleksei
Hi Aleksej, thank you very much for your help! I set your parameters and it is working well.
Regards, Son
Morning Aleksej,
Have a nice day!
Can you help me?
I have an issue with memory:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f2c1f6bd000, 262144, 0) failed; error='Cannot allocate memory' (errno=12)
#
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 262144 bytes for committing reserved memory.
An error report file with more information is saved as:
/data/kafka/bin/hs_err_pid9147.log
#
Compiler replay data is saved as:
/data/kafka/bin/replay_pid9147.log
hs_err_pid9147.log
replay_pid9147.log
Regards, Son