Closed — Rajpoot2 closed this issue 5 months ago.
Hmmm, please post any Maxwell logs.
Maxwell does not leave any logs at that point. If I use the option `--ignore_producer_error=false`, it exits whenever Kafka is down, and when I restart Maxwell after Kafka is back up, it works correctly and starts from the last processed position. But if Maxwell keeps running during the Kafka downtime and I don't use `--ignore_producer_error=false`, it does not work correctly.

I actually have another question:

1. Can I capture changes directly from Maxwell into my custom script, or into another MySQL database with a changed schema, without using any message queue or file producer? If yes, how?
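Not speaking for the maintainers, but Maxwell's stdout producer (`--producer=stdout`) emits one JSON event per line, which can be piped into a custom consumer. Below is a minimal sketch of such a handler; the script name, the SQL it emits, and the idea of rewriting inserts for a different target schema are all hypothetical, and real code would need proper escaping and handling of updates/deletes:

```python
import json
import sys

def handle_event(line):
    """Turn one Maxwell JSON event into a simple SQL statement (sketch only).

    Maxwell events carry "database", "table", "type", and "data" fields.
    """
    event = json.loads(line)
    table = f'{event["database"]}.{event["table"]}'
    if event["type"] == "insert":
        cols = ", ".join(event["data"])
        vals = ", ".join(repr(v) for v in event["data"].values())
        return f"INSERT INTO {table} ({cols}) VALUES ({vals})"
    return None  # updates/deletes omitted in this sketch

if __name__ == "__main__":
    # Hypothetical usage: bin/maxwell --producer=stdout | python handler.py
    for line in sys.stdin:
        stmt = handle_event(line)
        if stmt:
            print(stmt)
```

Instead of printing, the handler could execute the statement against a second MySQL connection, which would cover the "another mysql database with changed schema" case without any message queue.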
The first issue is resolved. Please reopen if there is any resolution for the second question.
Hello, I have configured Maxwell to publish changes to Kafka. If I stop my Kafka container for some time and changes happen during that window (I have tried 70 and 50 changes), then when Kafka comes back online, Maxwell only publishes the latest 35 changes; the older changes are lost. How do I tackle this? According to the documentation, Maxwell starts publishing from the last processed binary log position when Kafka is up again; if that's the case, it should capture all 70 changes.

One more thing: if I make fewer than 35 changes during the Kafka downtime, Maxwell publishes all of them in the correct sequence after Kafka recovers. But if I make more than 35 changes during the downtime, Maxwell publishes only the last 35, in random order.
Following is my docker-compose configuration for Maxwell:
```yaml
maxwell:
  image: zendesk/maxwell:latest
  container_name: maxwell
  links:
```
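For comparison, a fuller service definition might pass the relevant options on the command line, since the image's documented entrypoint is `bin/maxwell`. This is only a sketch with placeholder hostnames and credentials; `ignore_producer_error=false` makes Maxwell terminate on producer errors rather than continue (and silently drop events), so a restart resumes from the last committed binlog position:

```yaml
# hypothetical sketch — service names and credentials are placeholders
maxwell:
  image: zendesk/maxwell:latest
  container_name: maxwell
  depends_on:
    - mysql
    - kafka
  command: >
    bin/maxwell --user=maxwell --password=XXXXXX --host=mysql
    --producer=kafka --kafka.bootstrap.servers=kafka:9092
    --ignore_producer_error=false
```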