dataplat / dbatools

🚀 SQL Server automation and instance migrations have never been safer, faster or freer
https://dbatools.io
MIT License

Set-/Test-DbaDbCompression aborting on SqlServer 2014 #3698

Closed road42 closed 6 years ago

road42 commented 6 years ago

Steps to Reproduce

<#
 Set-DbaDbCompression -SqlInstance dbserver -Database database -MaxRunTime 60 -PercentCompression 25

Test-DbaDbCompression -SqlInstance dbserver -Database database
#>

Expected Behavior

Should compress or test the database.

Actual Behavior

The commands try to select

select object_id from sys.columns where encryption_type IS NOT NULL

See commit 480d268, but the column only exists in SQL Server 2016 and later

<# 
$error[0] | select *

writeErrorStream      : True
PSMessageDetails      :
Exception             : System.Exception: Exception calling "Query" with "2" argument(s): "Exception calling
                        "ExecuteWithResults" with "1" argument(s): "Execute with results failed for Database 'pFors'.
                        "" ---> System.Management.Automation.MethodInvocationException: Exception calling "Query" with
                        "2" argument(s): "Exception calling "ExecuteWithResults" with "1" argument(s): "Execute with
                        results failed for Database 'database'. "" --->
                        System.Management.Automation.MethodInvocationException: Exception calling "ExecuteWithResults"
                        with "1" argument(s): "Execute with results failed for Database 'database'. " --->
                        Microsoft.SqlServer.Management.Smo.FailedOperationException: Execute with results failed for
                        Database 'database'.  ---> Microsoft.SqlServer.Management.Common.ExecutionFailureException: An
                        exception occurred while executing a Transact-SQL statement or batch. --->
                        System.Data.SqlClient.SqlException: Invalid column name 'encryption_type'. 

#>
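
For context, a minimal sketch of the failure mode (assuming a SQL Server 2014 instance; names chosen for illustration): the whole batch is parsed and bound before the IF is evaluated, so the reference to sys.columns.encryption_type fails with "Invalid column name" even though the guarded branch would never execute on that version.

    -- Minimal repro sketch: fails at compile time on SQL Server 2014
    DECLARE @sqlVersion INT;
    SELECT @sqlVersion = SUBSTRING(CONVERT(VARCHAR(20), SERVERPROPERTY('ProductVersion')), 0,
                         CHARINDEX('.', CONVERT(VARCHAR(20), SERVERPROPERTY('ProductVersion'))));

    IF @sqlVersion >= 13
    BEGIN
        SELECT object_id
        FROM sys.columns
        WHERE encryption_type IS NOT NULL; -- column only exists on SQL Server 2016 and later
    END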

Environmental data

jpomfret commented 6 years ago

Will look into it! Thanks

MikeyBronowski commented 6 years ago

It seems the issue still persists. I took the SQL code out of the function and found that the column validation happens before the SQL version check is evaluated. I fixed it on my side, and it seems to work properly, by wrapping the commands in EXEC():

IF @sqlVersion >= '12'
BEGIN
    -- remove memory optimized tables
    EXEC('DELETE tdc FROM ##testdbacompression tdc INNER JOIN sys.tables t ON SCHEMA_NAME(t.schema_id) = tdc.[Schema] AND t.name = tdc.TableName WHERE t.is_memory_optimized = 1')
END

IF @sqlVersion >= '13'
BEGIN
    -- remove tables with encrypted columns
    EXEC('DELETE tdc FROM ##testdbacompression tdc INNER JOIN sys.tables t ON SCHEMA_NAME(t.schema_id) = tdc.[Schema] AND t.name = tdc.TableName INNER JOIN sys.columns c ON t.object_id = c.object_id WHERE encryption_type IS NOT NULL')
END

IF @sqlVersion >= '14'
BEGIN
    -- remove graph (node/edge) tables
    EXEC('DELETE tdc FROM ##testdbacompression tdc INNER JOIN sys.tables t ON tdc.[Schema] = SCHEMA_NAME(t.schema_id) AND tdc.TableName = t.name WHERE (is_node = 1 OR is_edge = 1)')
END
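
A minimal sketch of why the EXEC() wrapper helps (assuming a pre-2016 instance): the string passed to EXEC() is only compiled when the EXEC statement actually runs, so on a server where @sqlVersion is below 13 the missing encryption_type column is never parsed or bound.

    DECLARE @sqlVersion INT;
    SELECT @sqlVersion = SUBSTRING(CONVERT(VARCHAR(20), SERVERPROPERTY('ProductVersion')), 0,
                         CHARINDEX('.', CONVERT(VARCHAR(20), SERVERPROPERTY('ProductVersion'))));

    IF @sqlVersion >= 13
    BEGIN
        -- never compiled on SQL Server 2014, so no parse-time "Invalid column name" error
        EXEC('SELECT object_id FROM sys.columns WHERE encryption_type IS NOT NULL');
    END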

MikeyBronowski commented 6 years ago

This is using version 385, but @SQLDBAWithABeard made it work on SQL 2014.

function Test-DbaDbCompression {
    <#
    .SYNOPSIS
        Returns tables and indexes with preferred compression setting.
     .DESCRIPTION
        This function returns the results of a full table/index compression analysis.
        This function returns the best option to date for either NONE, Page, or Row Compression.
        Remember that uptime is critical: the longer the uptime, the more accurate the analysis.
        You would be best served by running Get-DbaUptime first, before running this command.

        Test-DbaCompression script derived from GitHub and the tigertoolbox
        (https://github.com/Microsoft/tigertoolbox/tree/master/Evaluate-Compression-Gains)
        In the output, you will find the following information:
        Column Percent_Update shows the percentage of update operations on a specific table, index, or partition,
        relative to total operations on that object. The lower the percentage of Updates
        (that is, the table, index, or partition is infrequently updated), the better candidate it is for page compression.
        Column Percent_Scan shows the percentage of scan operations on a table, index, or partition, relative to total
        operations on that object. The higher the value of Scan (that is, the table, index, or partition is mostly scanned),
        the better candidate it is for page compression.
        Column Compression_Type_Recommendation can have four possible outputs indicating where there is most gain,
        if any: 'PAGE', 'ROW', 'NO_GAIN' or '?'. When the output is '?' this approach could not give a recommendation,
        so as a rule of thumb I would lean to ROW if the object suffers mainly UPDATES, or PAGE if mainly INSERTS,
        but this is where knowing your workload is essential. When the output is 'NO_GAIN' well, that means that according
        to sp_estimate_data_compression_savings no space gains will be attained when compressing, as in the above output example,
        where compressing would grow the affected object.

        Note that this script will execute in the context of the current database.
        Also be aware that this may take a while to execute on large objects, because if the IS locks taken by
        sp_estimate_data_compression_savings cannot be honored, the SP will be blocked.

    .PARAMETER SqlInstance
        SQL Server name or SMO object representing the SQL Server to connect to. This can be a collection and receive pipeline input to allow the function to be executed against multiple SQL Server instances.

    .PARAMETER SqlCredential
        Login to the target instance using alternative credentials. Windows and SQL Authentication supported. Accepts credential objects (Get-Credential)

    .PARAMETER Database
        The database(s) to process - this list is autopopulated from the server. If unspecified, all databases will be processed.

    .PARAMETER ExcludeDatabase
        The database(s) to exclude - this list is autopopulated from the server

    .PARAMETER EnableException
        By default, when something goes wrong we try to catch it, interpret it and give you a friendly warning message.
        This avoids overwhelming you with "sea of red" exceptions, but is inconvenient because it basically disables advanced scripting.
        Using this switch turns this "nice by default" feature off and enables you to catch exceptions with your own try/catch.

    .NOTES
        Author: Jason Squires (@js_0505, jstexasdba@gmail.com)
        Tags: Compression, Table, Database
        Website: https://dbatools.io
        Copyright: (C) Chrissy LeMaire, clemaire@gmail.com
        License: MIT https://opensource.org/licenses/MIT

    .LINK
        https://dbatools.io/Test-DbaCompression

    .EXAMPLE
        Test-DbaCompression -SqlInstance localhost

        Returns compression analysis results for all user databases on the local instance

    .EXAMPLE
        Test-DbaCompression -SqlInstance ServerA -Database DBName | Out-GridView
        Returns results of all potential compression options for a single database
        with the recommendation of either Page or Row in a nicely formatted GridView

    .EXAMPLE
        Test-DbaCompression -SqlInstance ServerA
        Returns results of all potential compression options for all databases
        with the recommendation of either Page or Row
    .EXAMPLE
        $cred = Get-Credential sqladmin
        Test-DbaCompression -SqlInstance ServerA -ExcludeDatabase Database -SqlCredential $cred
        Returns results of all potential compression options for all databases
        with the recommendation of either Page or Row

    .EXAMPLE
        $servers = 'Server1','Server2'
        foreach ($svr in $servers)
        {
            Test-DbaCompression -SqlInstance $svr | Export-Csv -Path C:\temp\CompressionAnalysisPAC.csv -Append
        }

        This produces a full analysis of all the servers listed and appends the results to a CSV for you to
        analyze.
#>
    [CmdletBinding(DefaultParameterSetName = "Default")]
    param (
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [Alias("ServerInstance", "SqlServer")]
        [DbaInstanceParameter[]]$SqlInstance,
        [PSCredential]$SqlCredential,
        [object[]]$Database,
        [object[]]$ExcludeDatabase,
        [Alias('Silent')]
        [switch]$EnableException
    )

    begin {
        Write-Message -Level System -Message "Bound parameters: $($PSBoundParameters.Keys -join ", ")"
    }

    process {

        foreach ($instance in $SqlInstance) {
            try {
                Write-Message -Level VeryVerbose -Message "Connecting to $instance" -Target $instance
                $server = Connect-SqlInstance -SqlInstance $instance -SqlCredential $SqlCredential -MinimumVersion 10
            }
            catch {
                Stop-Function -Message "Failed to process Instance $Instance" -ErrorRecord $_ -Target $instance -Continue
            }

            $Server.ConnectionContext.StatementTimeout = 0

            [long]$instanceVersionNumber = $($server.VersionString).Replace(".", "")

            #From SQL Server 2016 SP1 (13.0.4001.0) onwards, every edition supports compression.
            if ($Server.EngineEdition -ne "EnterpriseOrDeveloper" -and $instanceVersionNumber -lt 13040010) {
                Stop-Function -Message "Compression before SQL Server 2016 SP1 (13.0.4001.0) is only supported by Enterprise, Developer or Evaluation edition. $Server has version $($server.VersionString) and edition is $($Server.EngineEdition)." -Target $instance -Continue
            }

            $sql = @"
SET NOCOUNT ON;

            IF OBJECT_ID('tempdb..##testdbacompression', 'U') IS NOT NULL
                DROP TABLE ##testdbacompression

            IF OBJECT_ID('tempdb..##tmpEstimateRow', 'U') IS NOT NULL
                DROP TABLE ##tmpEstimateRow

            IF OBJECT_ID('tempdb..##tmpEstimatePage', 'U') IS NOT NULL
                DROP TABLE ##tmpEstimatePage

            CREATE TABLE ##testdbacompression (
                [Schema] SYSNAME
                ,[TableName] SYSNAME
                ,[IndexName] SYSNAME NULL
                ,[Partition] INT
                ,[IndexID] INT
                ,[IndexType] VARCHAR(25)
                ,[PercentScan] SMALLINT
                ,[PercentUpdate] SMALLINT
                ,[RowEstimatePercentOriginal] BIGINT
                ,[PageEstimatePercentOriginal] BIGINT
                ,[CompressionTypeRecommendation] VARCHAR(7)
                ,SizeCurrent BIGINT
                ,SizeRequested BIGINT
                ,PercentCompression NUMERIC(10, 2)
                );

            CREATE TABLE ##tmpEstimateRow (
                objname SYSNAME
                ,schname SYSNAME
                ,indid INT
                ,partnr INT
                ,SizeCurrent BIGINT
                ,SizeRequested BIGINT
                ,SampleCurrent BIGINT
                ,SampleRequested BIGINT
                );

            CREATE TABLE ##tmpEstimatePage (
                objname SYSNAME
                ,schname SYSNAME
                ,indid INT
                ,partnr INT
                ,SizeCurrent BIGINT
                ,SizeRequested BIGINT
                ,SampleCurrent BIGINT
                ,SampleRequested BIGINT
                );

            INSERT INTO ##testdbacompression (
                [Schema]
                ,[TableName]
                ,[IndexName]
                ,[Partition]
                ,[IndexID]
                ,[IndexType]
                ,[PercentScan]
                ,[PercentUpdate]
                )
                SELECT s.NAME AS [Schema]
                ,t.NAME AS [TableName]
                ,x.NAME AS [IndexName]
                ,p.partition_number AS [Partition]
                ,x.Index_ID AS [IndexID]
                ,x.type_desc AS [IndexType]
                ,NULL AS [PercentScan]
                ,NULL AS [PercentUpdate]
            FROM sys.tables t
            INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
            INNER JOIN sys.indexes x ON x.object_id = t.object_id
            INNER JOIN sys.partitions p ON x.object_id = p.object_id
                AND x.Index_ID = p.Index_ID
            WHERE objectproperty(t.object_id, 'IsUserTable') = 1
                AND p.data_compression_desc = 'NONE'
                AND p.rows > 0
            ORDER BY [TableName] ASC;

            DECLARE @sqlVersion int
            SELECT @sqlVersion = substring(CONVERT(VARCHAR,SERVERPROPERTY('ProductVersion')),0,CHARINDEX('.',(CONVERT(VARCHAR,SERVERPROPERTY('ProductVersion')))))
            IF @sqlVersion >= '12'
                BEGIN
                    -- remove memory optimized tables
                    DELETE tdc
                    FROM ##testdbacompression tdc
                    INNER JOIN sys.tables t
                        ON SCHEMA_NAME(t.schema_id) = tdc.[Schema]
                        AND t.name = tdc.TableName
                    WHERE t.is_memory_optimized = 1
                END
"@ 

## Only append the checks for 2016+ columns when the server's major version is above 12; the columns don't exist on earlier versions and the batch won't parse.
if ($Server.Version.Major -gt 12) {
    $sql = $sql + @"
    IF @sqlVersion >= '13'
    BEGIN
        -- remove tables with encrypted columns
        DELETE tdc
        FROM ##testdbacompression tdc
        INNER JOIN sys.tables t
            ON SCHEMA_NAME(t.schema_id) = tdc.[Schema]
            AND t.name = tdc.TableName
        INNER JOIN sys.columns c
            ON t.object_id = c.object_id
        WHERE encryption_type IS NOT NULL
    END
IF @sqlVersion >= '14'
    BEGIN
        -- remove graph (node/edge) tables
        DELETE tdc
        FROM ##testdbacompression tdc
        INNER JOIN sys.tables t
            ON tdc.[Schema] = SCHEMA_NAME(t.schema_id)
            AND tdc.TableName = t.name
        WHERE (is_node = 1 OR is_edge = 1)
    END
"@
}
  $sql = $sql + @"

  DECLARE @schema SYSNAME
  ,@tbname SYSNAME
  ,@ixid INT

DECLARE cur CURSOR FAST_FORWARD
FOR
SELECT [Schema]
  ,[TableName]
  ,[IndexID]
FROM ##testdbacompression

OPEN cur

FETCH NEXT
FROM cur
INTO @schema
  ,@tbname
  ,@ixid

WHILE @@FETCH_STATUS = 0
BEGIN
  DECLARE @sqlcmd NVARCHAR(500)

  SET @sqlcmd = 'EXEC sp_estimate_data_compression_savings ''' + @schema + ''', ''' + @tbname + ''', ''' + cast(@ixid AS VARCHAR) + ''', NULL, ''ROW''';

  INSERT INTO ##tmpEstimateRow (
      objname
      ,schname
      ,indid
      ,partnr
      ,SizeCurrent
      ,SizeRequested
      ,SampleCurrent
      ,SampleRequested
      )
  EXECUTE sp_executesql @sqlcmd

  SET @sqlcmd = 'EXEC sp_estimate_data_compression_savings ''' + @schema + ''', ''' + @tbname + ''', ''' + cast(@ixid AS VARCHAR) + ''', NULL, ''PAGE''';

  INSERT INTO ##tmpEstimatePage (
      objname
      ,schname
      ,indid
      ,partnr
      ,SizeCurrent
      ,SizeRequested
      ,SampleCurrent
      ,SampleRequested
      )
  EXECUTE sp_executesql @sqlcmd

  FETCH NEXT
  FROM cur
  INTO @schema
      ,@tbname
      ,@ixid
END

CLOSE cur

DEALLOCATE cur;

--Update usage and partition_number - if the database was restored, sys.dm_db_index_operational_stats will be empty until the tables have been accessed. Executing sp_estimate_data_compression_savings first will make those entries appear
UPDATE ##testdbacompression
SET  [PercentScan] = i.range_scan_count * 100.0 / NULLIF((i.range_scan_count + i.leaf_insert_count + i.leaf_delete_count + i.leaf_update_count + i.leaf_page_merge_count + i.singleton_lookup_count), 0)
  ,[PercentUpdate] = i.leaf_update_count * 100.0 / NULLIF((i.range_scan_count + i.leaf_insert_count + i.leaf_delete_count + i.leaf_update_count + i.leaf_page_merge_count + i.singleton_lookup_count), 0)
FROM sys.dm_db_index_operational_stats(db_id(), NULL, NULL, NULL) i
INNER JOIN ##testdbacompression tmp ON OBJECT_ID(tmp.TableName) = i.[object_id]
  AND tmp.IndexID = i.index_id;

WITH tmp_cte (
  objname
  ,schname
  ,indid
  ,pct_of_orig_row
  ,pct_of_orig_page
  ,SizeCurrent
  ,SizeRequested
  )
AS (
  SELECT tr.objname
      ,tr.schname
      ,tr.indid
      ,(tr.SampleRequested * 100) / CASE
          WHEN tr.SampleCurrent = 0
              THEN 1
          ELSE tr.SampleCurrent
          END AS pct_of_orig_row
      ,(tp.SampleRequested * 100) / CASE
          WHEN tp.SampleCurrent = 0
              THEN 1
          ELSE tp.SampleCurrent
          END AS pct_of_orig_page
      ,tr.SizeCurrent
      ,tr.SizeRequested
  FROM ##tmpestimaterow tr
  INNER JOIN ##tmpestimatepage tp ON tr.objname = tp.objname
      AND tr.schname = tp.schname
      AND tr.indid = tp.indid
      AND tr.partnr = tp.partnr
  )
UPDATE ##testdbacompression
SET [RowEstimatePercentOriginal] = tcte.pct_of_orig_row
  ,[PageEstimatePercentOriginal] = tcte.pct_of_orig_page
  ,SizeCurrent = tcte.SizeCurrent
  ,SizeRequested = tcte.SizeRequested
  ,PercentCompression = 100 - (cast(tcte.[SizeRequested] AS NUMERIC(21, 2)) * 100 / (tcte.[SizeCurrent] - ABS(SIGN(tcte.[SizeCurrent])) + 1))
FROM tmp_cte tcte
  ,##testdbacompression tcomp
WHERE tcte.objname = tcomp.TableName
  AND tcte.schname = tcomp.[schema]
  AND tcte.indid = tcomp.IndexID;

WITH tmp_cte2 (
  TableName
  ,[schema]
  ,IndexID
  ,[CompressionTypeRecommendation]
  )
AS (
  SELECT TableName
      ,[schema]
      ,IndexID
      ,CASE
          WHEN [RowEstimatePercentOriginal] >= 100
              AND [PageEstimatePercentOriginal] >= 100
              THEN 'NO_GAIN'
          WHEN [PercentUpdate] >= 10
              THEN 'ROW'
          WHEN [PercentScan] <= 1
              AND [PercentUpdate] <= 1
              AND [RowEstimatePercentOriginal] < [PageEstimatePercentOriginal]
              THEN 'ROW'
          WHEN [PercentScan] <= 1
              AND [PercentUpdate] <= 1
              AND [RowEstimatePercentOriginal] > [PageEstimatePercentOriginal]
              THEN 'PAGE'
          WHEN [PercentScan] >= 60
              AND [PercentUpdate] <= 5
              THEN 'PAGE'
          WHEN [PercentScan] <= 35
              AND [PercentUpdate] <= 5
              THEN '?'
          ELSE 'ROW'
          END
  FROM ##testdbacompression
  )
UPDATE ##testdbacompression
SET [CompressionTypeRecommendation] = tcte2.[CompressionTypeRecommendation]
FROM tmp_cte2 tcte2
  ,##testdbacompression tcomp2
WHERE tcte2.TableName = tcomp2.TableName
  AND tcte2.[schema] = tcomp2.[schema]
  AND tcte2.IndexID = tcomp2.IndexID;

SET NOCOUNT ON;

SELECT DBName = DB_Name()
  ,[Schema]
  ,[TableName]
  ,[IndexName]
  ,[Partition]
  ,[IndexID]
  ,[IndexType]
  ,[PercentScan]
  ,[PercentUpdate]
  ,[RowEstimatePercentOriginal]
  ,[PageEstimatePercentOriginal]
  ,[CompressionTypeRecommendation]
  ,SizeCurrentKB = [SizeCurrent]
  ,SizeRequestedKB = [SizeRequested]
  ,PercentCompression
FROM ##testdbacompression;

IF OBJECT_ID('tempdb..##testdbacompression', 'U') IS NOT NULL
  DROP TABLE ##testdbacompression

IF OBJECT_ID('tempdb..##tmpEstimateRow', 'U') IS NOT NULL
  DROP TABLE ##tmpEstimateRow

IF OBJECT_ID('tempdb..##tmpEstimatePage', 'U') IS NOT NULL
  DROP TABLE ##tmpEstimatePage;

"@         
            #Gather the accessible user databases (or just those specified) and warn when a database can't be queried.
            try {
                $dbs = $server.Databases | Where-Object IsAccessible

                if ($Database) {
                    $dbs = $dbs | Where-Object { $Database -contains $_.Name -and $_.IsSystemObject -eq 0 }
                }

                else {
                    $dbs = $dbs | Where-Object { $_.IsSystemObject -eq 0 }
                }

                if (Test-Bound "ExcludeDatabase") {
                    $dbs = $dbs | Where-Object Name -NotIn $ExcludeDatabase
                }
            }
            catch {
                Stop-Function -Message "Unable to gather list of databases for $instance" -Target $instance -ErrorRecord $_ -Continue
            }

            foreach ($db in $dbs) {
                try {
                    $dbCompatibilityLevel = [int]($db.CompatibilityLevel.ToString().Replace('Version', ''))

                    Write-Message -Level Verbose -Message "Querying $instance - $db"
                    if ($db.status -ne 'Normal' -or $db.IsAccessible -eq $false) {
                        Write-Message -Level Warning -Message "$db is not accessible." -Target $db
                        Continue
                    }

                    if ($dbCompatibilityLevel -lt 100) {
                        Stop-Function -Message "$db has a compatibility level lower than Version100 and will be skipped." -Target $db -Continue
                        Continue
                    }
                    #Execute query against individual database and add to output
                    foreach ($row in ($server.Query($sql, $db.Name))) {
                        [pscustomobject]@{
                            ComputerName                  = $server.ComputerName
                            InstanceName                  = $server.ServiceName
                            SqlInstance                   = $server.DomainInstanceName
                            Database                      = $row.DBName
                            Schema                        = $row.Schema
                            TableName                     = $row.TableName
                            IndexName                     = $row.IndexName
                            Partition                     = $row.Partition
                            IndexID                       = $row.IndexID
                            IndexType                     = $row.IndexType
                            PercentScan                   = $row.PercentScan
                            PercentUpdate                 = $row.PercentUpdate
                            RowEstimatePercentOriginal    = $row.RowEstimatePercentOriginal
                            PageEstimatePercentOriginal   = $row.PageEstimatePercentOriginal
                            CompressionTypeRecommendation = $row.CompressionTypeRecommendation
                            SizeCurrent                   = [dbasize]($row.SizeCurrentKB * 1024)
                            SizeRequested                 = [dbasize]($row.SizeRequestedKB * 1024)
                            PercentCompression            = $row.PercentCompression
                        }
                    }
                }
                catch {
                    Stop-Function -Message "Unable to query $instance - $db" -Target $db -ErrorRecord $_ -Continue
                }
            }
        }
    }
}
SQLDBAWithABeard commented 6 years ago

Only tested on SQL 2012, SQL 2014, SQL 2016 and SQL 2017

jpomfret commented 6 years ago

Just pulled down the latest development branch and Set-/Test-DbaDbCompression is working on 2008R2, 2012, 2014, 2016 and 2017. I don't see any recent changes to the code though, am I missing something? Is it tripping on a specific object type?

SQLDBAWithABeard commented 6 years ago

It was erroring when parsing, on the encryption_type column I think. It did the same when we pulled the T-SQL into SSMS.

@boosi83 watched your session at the PS Power Hour and wanted to try it. He can give more info, I expect.

MikeyBronowski commented 6 years ago

@jpomfret I will give it a try tomorrow and let you know. Can you tell me where I can find the dev branch?

jpomfret commented 6 years ago

The version of the code I tested showed no changes in the last month, so the latest module version should be the same. Can you screenshot the error? (Redact any sensitive data.) I'll be online tomorrow if you want to chat on Slack.

MikeyBronowski commented 6 years ago

@jpomfret It was very similar to the first error in this thread, however I got a warning:

Warning: Invalid column name 'encryption_type' Invalid column name 'is_node' Invalid column name 'is_edge'

jpomfret commented 6 years ago

I'm having trouble replicating this error. Can you do Get-Module and see which version of dbatools you're using? Also, you're running against SQL Server 2014, is that correct? Are the databases in 2014 compat mode? (Not sure that affects this, but just curious.)

@SQLDBAWithABeard were you able to replicate the error @MikeyBronowski is getting?
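
For reference, a quick T-SQL sketch (an assumption: run it in the target database; the dbatools module version itself comes from Get-Module on the client) covering the server-side details asked about here:

    -- Shows engine version/edition and the current database's compatibility level
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('Edition')        AS Edition,
           name                             AS DatabaseName,
           compatibility_level              AS CompatibilityLevel
    FROM sys.databases
    WHERE name = DB_NAME();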

MikeyBronowski commented 6 years ago

I ran some tests on my other machine with SQL 2016 (Microsoft SQL Server 2016 (SP2) (KB4052908) - 13.0.5026.0 (X64)) with a database at the matching compat level, using dbatools 0.9.393.

  1. VSCode has two outputs for my server:

     Using Test-DbaDBCompression -SqlInstance localhost -Database Football:
     WARNING: [10:36:03][Test-DbaDbCompression] Unable to query localhost - [Football] | Invalid column name 'is_node'. Invalid column name 'is_edge'.

     Using Test-DbaDBCompression -SqlInstance Instance01 -Database Football:
     Runs OK, producing results as expected.

  2. PowerShell ISE in both cases returns the compression results.

potatoqualitee commented 6 years ago

I am unable to repro any issues with SQL 2014. And I noticed, @MikeyBronowski, that ISE works for you but Code does not. I have no idea how Code could mess up a T-SQL query, but is this possible?

What happens when you open up plain ol powershell.exe and use that?

potatoqualitee commented 6 years ago

@SQLDBAWithABeard - shall we just copy/paste that code above that you provided?

SQLDBAWithABeard commented 6 years ago

@jpomfret I was sat next to Mikey so I saw it.

It was also happening in SSMS with the raw T-SQL

@cl if it doesn't cause any issues then we could. Also, on the prod machine we were using, this was happening in ISE. So it is a bit odd that no-one else can repro.

jpomfret commented 6 years ago

Man, that is so strange. I had added some logic so that, depending on the version, it checked different columns, with the idea that it doesn't look for columns that don't exist yet. I'll try to pull out the SQL later and see if I can break it.

SQLDBAWithABeard commented 6 years ago

Nope, very odd. I have just put dbatools onto a SQL 2014 container and I don't get the error either, even with localhost. But I did see it :-) with Mikey - it must be something with that particular box - not sure what OS it is using.

niphlod commented 6 years ago

By any chance could it be dependent on the database version rather than the SQL version?! ;-) jm2c

jpomfret commented 6 years ago

Can you replicate it, @niphlod? I tried changing the compat level and still didn't get the errors.

niphlod commented 6 years ago

Nope, it was a wild guess.

jpomfret commented 6 years ago

Great news @MikeyBronowski, you're not crazy!! I have no idea why, but I am now sometimes able to reproduce this error. I've just submitted a PR that should resolve this. If you are able to test and confirm, that would be super.

MikeyBronowski commented 6 years ago

@jpomfret That's really great news - you are the first one who said I am not crazy :-)

I will look into it later.

MikeyBronowski commented 6 years ago

@jpomfret Just ran it on my SQL 2016, where version 0.9.431 gave me an error with the node/edge columns and 0.9.475 went through without warnings or errors.

I will check that on older SQLs too.

jpomfret commented 6 years ago

Awesome @potatoqualitee, ok to close this now?

niphlod commented 6 years ago

Closing this. @MikeyBronowski, @jpomfret, @potatoqualitee, feel free to reopen if the issue still stands.