Open kekcjkee opened 3 years ago
My config file:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7.2" />
</startup>
<system.diagnostics>
<switches>
<!-- TraceLevel Enum
0 - Off - Output no tracing and debugging messages.
1 - Error - Output error-handling messages.
2 - Warning - Output warnings and error-handling messages.
3 - Info - Output informational messages, warnings, and error-handling messages.
4 - Verbose - Output all debugging and tracing messages. -->
<add name="GeneralTraceSwitch" value="Verbose" />
</switches>
</system.diagnostics>
<appSettings>
<!-- Central repository connection details -->
<add key="CentralRepositorySqlInstance" value="#######" />
<add key="CentralRepositorySqlDatabase" value="SQLWATCH" />
<!-- It is recommended to use Windows Authentication.
Even without an AD, you can create a new Windows Account and
SQL proxy, grant it access to the Central Repository database
and run the SQL Agent CMD step as that proxy -->
<add key="CentralRepositorySqlUser" value="" />
<!-- Sql Password Encrypted with either Machine or User key-->
<add key="CentralRepositorySqlSecret" value="" />
<!-- Streaming data from the DataReader into BulkCopy will generally be much faster
and will use less memory but may use more CPU (see the sketch after the config below) -->
<add key="SqlBulkCopy.EnableStreaming" value="true"/>
<!-- SqlBulkCopy.BatchSize Property -->
<add key="SqlBulkCopy.BatchSize" value="4000"/>
<!-- Query Execution Timeout on the remote instance. Normally the data import
should not take more than a few seconds, but if you decide to do a
full load, the default timeout may not be long enough -->
<add key="SqlBulkCopy.BulkCopyTimeout" value="300"/>
<!-- You can split the import by [environment]
as in [dbo].[sqlwatch_config_sql_instance] -->
<add key="EnvironmentToProcess" value="ST3" />
<!-- To improve performance, only new records are imported from the remote instance.
However, if you are getting key violation errors, the data may have gotten out of sync,
in which case a full load may be required.
This will apply to ALL servers in the batch. If you only want to fully load a single instance,
you can use the EnvironmentToProcess parameter to isolate it. -->
<add key="FullLoad" value="false"/>
<!-- Log file parameters -->
<add key="LogFile" value=".\LOGS\SqlWatchImport.log" />
<!-- Max size of a single log file -->
<add key="MaxLogSizeMB" value="10" />
<!-- Max number of log files to keep -->
<add key="MaxLogFiles" value="10" />
<!-- Whether to output to log file or not. If set to false,
nothing will be output to the log file -->
<add key="PrintToLogFile" value="true"/>
<!-- The application will write messages to the Log defined in the Log config.
Optionally, you can enable writing to the Console.
If you're running it via a SQL Agent job, the console output will appear
in the agent job history, which may be handy but not always desired due
to the potential size of the log output -->
<add key="PrintToConsole" value="true"/>
<!-- You can control the number of threads here.
Normally .NET will do a good job managing threads
based on the number of cores for CPU and IO bound workloads.
However, in this case, the application is passing data from one SQL server
to another with very minimal processing and thus minimal CPU and IO usage.
We may find that increasing the number of threads may improve performance.
If you do, watch the CPU and Memory usage.
Set 0 for automatic management.
Set -1 to let the application manage the number of threads based on the
number of remote SqlInstances and Tables.
Depending on your setup this may be much faster or much slower. Please test. -->
<add key="MinThreads" value="-1" />
<add key="MaxThreads" value="0" />
<!-- Similar to the number of threads, this controls
the number of concurrent connections.
Set 0 for automatic management. -->
<add key="MinPoolSize" value="0" />
<!-- Normally the Connection Pool is limited to 100
concurrent connections. This will probably be enough
for the standard number of threads, however, if you
do increase the number of threads or the number of connections
you may also have to increase the number of available connections -->
<add key="MaxPoolSize" value="0" />
<!-- If we encounter an error, we can dump the data in memory into a physical table
to help debugging. This is handy if there are key violations or duplicates -->
<add key="DumpDataOnError" value="true"/>
<add key="ClientSettingsProvider.ServiceUri" value="" />
</appSettings>
<system.web>
<!-- It is advisable to use Windows Authentication.
However, if you really want to use SQL authentication there are some
important things to remember.
The credentials will be stored in the SQLWATCH database, in
the [dbo].[sqlwatch_config_sql_instance] table.
The password will be encrypted using the Windows Machine Key,
or a key specified below.
If you encrypt using the Machine key, the application will only
work on the machine where the encryption happened.
If you encrypt using the custom Key, the application will work
on any machine, but the Key will have to be saved in this config file.
If you really have to use SQL authentication, I would suggest using the Machine key.
Note that anyone with access to the Machine can technically get the Machine key,
and if they also have access to the SQLWATCH database, they may be able to decrypt
SQL passwords stored in the table. If you are going down this route, create a
SQL user that has READONLY access to the remote SQLWATCH database and nothing else,
so if someone decrypts the passwords, they are not going to gain access to anything
on the remote Sql Instance.
If you want to use a custom Machine key, you MUST generate a new key,
uncomment the machineKey section below and paste it there.
You can generate ASP keys here: https://www.allkeysgenerator.com/Random/ASP-Net-MachineKey-Generator.aspx
If you change the key after you have encrypted passwords,
or if you use the Machine key and want to run it on a different machine,
you are going to have to recreate the user secret, i.e. save a new, encrypted password.
You do not have to change the Sql Password, just re-save it using the new encryption keys.
You can do so by running:
SqlWatchImport.exe --update -s SQLSERVER1 -u SQLUSER -p PASSWORD
-->
<machineKey
validationKey="BF889BA03603B00A7F0EB21819A9206966C5CB591D4C0EC06833405F33A4A10645DF760AB3F9F12618B00E79658436723F8C255E5A7B06860D421B05BC6D7DA8"
decryptionKey="C9D3C356175B615AA69B821EC469A3E08423B47D1BBE21ED5A69688677066161"
validation="SHA1"
decryption="AES"/>
<membership defaultProvider="ClientAuthenticationMembershipProvider">
<providers>
<add name="ClientAuthenticationMembershipProvider" type="System.Web.ClientServices.Providers.ClientFormsAuthenticationMembershipProvider, System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" serviceUri="" />
</providers>
</membership>
<roleManager defaultProvider="ClientRoleProvider" enabled="true">
<providers>
<add name="ClientRoleProvider" type="System.Web.ClientServices.Providers.ClientRoleProvider, System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" serviceUri="" cacheTimeout="86400" />
</providers>
</roleManager>
</system.web>
</configuration>
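For reference, here is a minimal sketch of how the SqlBulkCopy.EnableStreaming, SqlBulkCopy.BatchSize and SqlBulkCopy.BulkCopyTimeout keys above presumably map onto the .NET SqlBulkCopy API. The CopyTable helper, connection strings and table name are illustrative placeholders, not the actual SqlWatchImport source:

// Minimal sketch only: shows the three SqlBulkCopy.* settings from the config above.
// The helper name, connection strings and table name are placeholders, not SqlWatchImport code.
using System.Configuration;
using System.Data.SqlClient;

static class BulkCopyExample
{
    public static void CopyTable(string sourceConnectionString, string targetConnectionString, string tableName)
    {
        using (var source = new SqlConnection(sourceConnectionString))
        using (var target = new SqlConnection(targetConnectionString))
        {
            source.Open();
            target.Open();

            using (var cmd = new SqlCommand($"SELECT * FROM {tableName}", source))
            using (var reader = cmd.ExecuteReader())
            using (var bulkCopy = new SqlBulkCopy(target))
            {
                bulkCopy.DestinationTableName = tableName;

                // Stream rows from the DataReader instead of buffering them all in memory.
                bulkCopy.EnableStreaming =
                    bool.Parse(ConfigurationManager.AppSettings["SqlBulkCopy.EnableStreaming"]);

                // Number of rows sent to the server per batch.
                bulkCopy.BatchSize =
                    int.Parse(ConfigurationManager.AppSettings["SqlBulkCopy.BatchSize"]);

                // Seconds before the bulk copy operation times out; raise this for a full load.
                bulkCopy.BulkCopyTimeout =
                    int.Parse(ConfigurationManager.AppSettings["SqlBulkCopy.BulkCopyTimeout"]);

                bulkCopy.WriteToServer(reader);
            }
        }
    }
}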
My log: SqlWatchImport.log
Upvote for this issue. Encountered it whilst investigating #423
There is a variable declaration for EnvironmentToProcess, but it is never used. Possibly it was a feature that was never finished.
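For context, here is a rough sketch of how the EnvironmentToProcess value could be applied once the remote instances have been read from [dbo].[sqlwatch_config_sql_instance]. The RemoteInstance type and FilterByEnvironment helper are hypothetical and do not exist in the SqlWatchImport code base; this only illustrates the expected behaviour:

// Hypothetical sketch only: the RemoteInstance type and this helper are illustrative,
// not part of the actual SqlWatchImport code base.
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;

class RemoteInstance
{
    // Corresponds to rows in [dbo].[sqlwatch_config_sql_instance].
    public string SqlInstance { get; set; }
    public string Environment { get; set; }
}

static class InstanceFilter
{
    public static IEnumerable<RemoteInstance> FilterByEnvironment(IEnumerable<RemoteInstance> instances)
    {
        // Read the value declared in <appSettings>; an empty or missing value means "process everything".
        string environment = ConfigurationManager.AppSettings["EnvironmentToProcess"];

        if (string.IsNullOrWhiteSpace(environment))
        {
            return instances;
        }

        // Only process instances tagged with the requested environment, e.g. "ST3".
        return instances.Where(i =>
            string.Equals(i.Environment, environment, StringComparison.OrdinalIgnoreCase));
    }
}

Because no such filter is applied today, every instance in the batch gets synced regardless of the EnvironmentToProcess setting.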
Did you check DOCS to make sure there is no workaround? Yes.
Describe the bug: SqlWatchImport ignores the config value "EnvironmentToProcess" and syncs data for all remote instances anyway.
To Reproduce: Steps to reproduce the behavior:
Expected behavior: I want to be able to split environments for sync.
Windows Server (please complete the following information):
SQL Server (please complete the following information):
SQLWATCH version (from DACPAC or from sysinstances)