spaghettidba / WorkloadTools

A collection of tools to collect, analyze and replay SQL Server workloads, on premises and in the cloud
MIT License

Trying to convert a 10GB trace file times out #97

Closed — aallee1 closed this issue 3 years ago

aallee1 commented 3 years ago

Hi, I have captured a trace on the production server and am now trying to convert the trace file using ConvertWorkload.exe, but the process times out after roughly 350 seconds. Is there a switch I can use to prevent the timeout?

My source and target servers are both SQL Server 2019 Enterprise; the only difference is that the new server is part of an Availability Group (AG). So I'm trying to compare the standalone setup against the AG setup in Azure.

Any help guidance is much appreciated.

Smaller trace files worked like a charm. Absolutely loving it.

One more thing to mention: I have spun up a separate server just for WorkloadTools, and the WorkloadTools app is installed there. That server has 16 cores and 64 GB of RAM.

spaghettidba commented 3 years ago

Hi, you're not the first to report this behavior. How many trace files do you have? How big are they? Have you considered capturing to the WorkloadTools native format (.sqlite) directly?

aallee1 commented 3 years ago

There is only 1 file and it is nearly 10GB. Can I capture the file using the native format directly? Can I just save the workload and replay it at a later stage?

spaghettidba commented 3 years ago

Yes of course! You can use a configuration like this:

{
    "Controller": {

        // This listener connects to the source instance
        // using Extended Events
        "Listener":
        {
            "__type": "ExtendedEventsWorkloadListener",
            "ConnectionInfo":
            {
                "ServerName": "SourceInstance"
            },
            "DatabaseFilter": "YourDatabase"
        },

        "Consumers":
        [
            // This consumer writes the workload to a file
            {
                "__type": "WorkloadFileWriterConsumer",
                "OutputFile": "C:\\temp\\SqlWorkload.sqlite"
            }
        ]
    }
}
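Save that as a .json file and pass it to SqlWorkload with the `--File` switch (the path and the name `capture.json` here are just placeholders for wherever you save it):

```shell
# Start the capture using the configuration above.
# "C:\temp\capture.json" is an example path - use wherever you saved the file.
SqlWorkload.exe --File "C:\temp\capture.json"
```

SqlWorkload keeps running and writing events to the output file until you stop it (CTRL+C).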

Later, for the replay, you can use a configuration file like this:

{
    "Controller": {

        "Listener":
        {
            "__type": "FileWorkloadListener",
            "Source": "C:\\temp\\SqlWorkload.sqlite",
            // in this case you want to simulate the original query rate
            "SynchronizationMode": "true"
        },

        "Consumers":
        [
            {
                "__type": "ReplayConsumer",
                "ConnectionInfo": 
                {
                    "ServerName": "TargetInstance",
                    "DatabaseName": "YourDatabase"
                }
            }
        ]
    }
}
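As with the capture, you run the replay by pointing SqlWorkload at the configuration file (again, `replay.json` is just a placeholder name):

```shell
# Replay the captured workload against the target instance.
SqlWorkload.exe --File "C:\temp\replay.json"
```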

This article should get you started: https://spaghettidba.com/2019/06/20/workload-replay-with-workloadtools/

aallee1 commented 3 years ago

Thank you very much for your help with this. I will use the native listener to capture the trace. Thank you for writing such a cool application :)

spaghettidba commented 3 years ago

Thank you for reporting your issues! I will keep this one open and try to replicate the 10 GB trace timeout. Cheers!

spaghettidba commented 3 years ago

Can't reproduce