
Saturday, August 27, 2022

Recover Corrupted Table Data from a Database

 Hello Friends,

 For some reason, maybe due to a sudden computer shutdown/crash or a forced shutdown, one of my client's BizTalk DEV virtual machines presented strange behavior this morning. The Host Instances kept restarting for no apparent reason. When I started to diagnose the problem and inspected the machine's Event Viewer, I found the following error:

SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:1848; actual 0:0). It occurred during a read of page (1:1848) in database ID 10 at offset 0x00000000e70000 in file ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\BizTalkMsgBoxDb.mdf’.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

I started receiving this error while running a SELECT query on some specific records:

SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:19473258; actual 1:5603466).

It occurred during a read of page (1:19473258) in database ID 7 at offset 0x000025246d4000 in file 'F:\Data\xxxxx_nav2016_Live_Data.mdf'. 

Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately.

Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors;

for more information, see SQL Server Books Online.

 

In NAV, you will get this error for the table that has the issue.







A few pages in the database are corrupted for this table.

To solve this issue, we can run the steps below (based on this article):

https://blog.sandro-pereira.com/2017/07/25/sql-server-detected-a-logical-consistency-based-biztalkmsgboxdb-database/

Set the database to Single User

You can do this using a query or manually from the database properties:

 

ALTER DATABASE [DB_NAME]

SET SINGLE_USER;

GO
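If other sessions are still connected, the statement above can block or fail. A common variant (my suggestion, not from the original post) forces other connections off immediately; `[DB_NAME]` remains a placeholder for your database name:

```sql
-- Disconnect other sessions immediately, rolling back their open transactions.
ALTER DATABASE [DB_NAME]
SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
```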

 

Or we can do it from UI

Right-click on the database → Properties → Options → Restrict Access → select Single User

Run the query below for the table whose records are corrupted:


DBCC CHECKTABLE([Table Name], REPAIR_ALLOW_DATA_LOSS)

GO

 

Note: the command above runs with data loss, so make sure you take a backup before running it. The system will delete the records whose pages are corrupted and can no longer be read.
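To take that backup first, a minimal sketch (the database name and file path are placeholders, not from the original post):

```sql
-- Full, copy-only backup before attempting any repair.
BACKUP DATABASE [DB_NAME]
TO DISK = 'D:\Backup\DB_NAME_BeforeRepair.bak'
WITH COPY_ONLY, INIT, STATS = 10;
GO
```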

Once this command has run, you will be able to run queries properly, but the few corrupted records will have been deleted, so you need to get them back from an old database backup.

 

We found the missing records in a restored copy of the old backup and inserted them into a new table (using INSERT INTO [new table name]).

Then we generated the INSERT statements from that temporary table using Generate Scripts on the new table (data only).

Run this script to recreate the records.
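One way to find which records were lost is to restore the old backup side by side and compare with EXCEPT. This is a sketch; the database and table names here are hypothetical placeholders:

```sql
-- Rows present in the restored backup but missing from the repaired live table.
SELECT *
INTO [LiveDB].[dbo].[MissingRecords]   -- staging table for the recovered rows
FROM (
    SELECT * FROM [RestoredDB].[dbo].[SomeTable]
    EXCEPT
    SELECT * FROM [LiveDB].[dbo].[SomeTable]
) AS missing;
```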

 Run the query below to set the database back to multi-user:

 

ALTER DATABASE [DB_NAME]

SET MULTI_USER;

GO

 

Or, in the same way, you can do it from the UI.


======


-- List the pages SQL Server has recorded as suspect
SELECT * FROM [msdb].[dbo].[suspect_pages]


-- Inspect a specific page: DBCC PAGE (database_id, file_id, page_id, print_option)
DBCC TRACEON (3604);

DBCC PAGE (7, 1, 23773535, 0);

DBCC TRACEOFF (3604);

GO


-- Resolve the object id shown in the page header to a table name
SELECT OBJECT_NAME (808167912);

=============

Run the DBCC CHECKDB command to find errors in the database pages:

DBCC CHECKDB

This will take one to two hours for a database of around 200 GB.
If any errors are found at the end, repair that page or table.
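A sketch of how the check can be run with less noise (the options are my suggestion, not from the post):

```sql
-- Report all errors, suppress informational messages.
DBCC CHECKDB ('DB_NAME') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
```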

==============


Done....

The system will recover the old data, or it is possible that the corrupted rows have been deleted from the database.

Thank you for reading...

I hope this helps someone in the future...

Keep Sharing....Keep Growing....


Thursday, August 19, 2021

Optimize SQL Database Table Performance with a Rebuild Query - Dynamics NAV SQL Optimization

 Hello Experts,

Run the query below on the SQL database for optimization.

Take a full backup of the database before running this query.

================================================

DECLARE @sql VARCHAR(MAX), @i INT, @init INT,
        @table sysname, @schema sysname, @indexname sysname,
        @indextype VARCHAR(50), @fillfactor INT

-- Drop the work table from any previous run
IF OBJECT_ID('tempdb..##t') IS NOT NULL
      DROP TABLE ##t

-- Collect every named index on user tables, highest fill factor first
SELECT
  ROW_NUMBER() OVER(ORDER BY i.fill_factor DESC) AS ID
, DB_NAME() AS Database_Name
, sc.name AS Schema_Name
, o.name AS Table_Name
, o.type_desc
, i.name AS Index_Name
, i.type_desc AS Index_Type
, i.fill_factor
, 0 AS [Status]
, CAST('' AS VARCHAR(1000)) AS ErrorMessage
INTO ##t
FROM sys.indexes i
INNER JOIN sys.objects o ON i.object_id = o.object_id
INNER JOIN sys.schemas sc ON o.schema_id = sc.schema_id
WHERE i.name IS NOT NULL
AND o.type = 'U'
ORDER BY i.fill_factor DESC, o.name

SELECT @i = MAX(ID), @init = 1 FROM ##t

WHILE @i >= @init
BEGIN
    SELECT @table = Table_Name, @schema = Schema_Name, @indexname = Index_Name,
           @indextype = Index_Type, @fillfactor = fill_factor
    FROM ##t WHERE ID = @init

    -- Reset so an index that matches neither branch is skipped
    -- instead of re-running the previous statement
    SET @sql = NULL

    IF @indextype = 'CLUSTERED' AND @fillfactor NOT IN (0, 100)
        SET @sql = 'ALTER INDEX ' + QUOTENAME(@indexname) + ' ON ' + QUOTENAME(@schema) + '.' + QUOTENAME(@table) + ' REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100)'
    ELSE IF @indextype = 'NONCLUSTERED' AND @fillfactor NOT IN (0)
        SET @sql = 'ALTER INDEX ' + QUOTENAME(@indexname) + ' ON ' + QUOTENAME(@schema) + '.' + QUOTENAME(@table) + ' REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90)'

    IF @sql IS NOT NULL
    BEGIN
        BEGIN TRY
            PRINT 'Starting index rebuild number ' + CAST(@init AS VARCHAR(10)) + ' out of ' + CAST(@i AS VARCHAR(10)) + ', Table Name = ' + @table
            PRINT @sql
            EXEC (@sql)
            UPDATE ##t SET [Status] = 1 WHERE ID = @init
        END TRY
        BEGIN CATCH
            UPDATE ##t SET ErrorMessage = ERROR_MESSAGE() WHERE ID = @init
        END CATCH
    END

    SET @init += 1
END

=====================================================

This query will take some time, depending on the size of the database.
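Rather than rebuilding every index, you can first check fragmentation and rebuild only what needs it. A sketch using sys.dm_db_index_physical_stats; the 30% threshold is a common rule of thumb, my assumption rather than the post's:

```sql
-- Named indexes with more than 30% fragmentation in the current database.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL
ORDER BY ips.avg_fragmentation_in_percent DESC;
```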



I hope it will help someone.

Keep Sharing...Keep Growing...


====================================================

Perform the following steps to reduce table locks: (Dynamics NAV)

  1. Run the query above to rebuild table indexes (run it at night – it can be scheduled to run every weekend)
  2. Increase the table lock wait time (the default is 10 seconds – it can be changed to 30 seconds) from Database → Alter → Advanced → Timeout Duration
  3. Check your custom code – if it runs a long loop for some standard calculation, COMMIT inside the code to release the table lock
  4. Verify that no CONFIRM dialog is used in the middle of long-running code – it is a major cause of table lock issues
  5. Identify the table for which users are getting lock issues (run the SQL query EXEC sp_who2 to identify the running SQL commands and check which executing query is causing the table lock)
  6. Re-transfer the full SQL database to a fresh DB – it can reduce the database size by up to 40% and increase performance (this is an optional step to take only if really required)
  7. Check long-running code and add SETCURRENTKEY to optimize it
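For step 5, besides sp_who2, the blocking chain can also be read directly from the DMVs; a sketch:

```sql
-- Sessions that are currently blocked, who blocks them,
-- and the statement each blocked session is running.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;
```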

Tuesday, December 19, 2017

Error importing NAV License in AzureSQL Database

Hello Friends,

When you try to open an Azure SQL database in the NAV Development Environment or the RTC client,
you will get multiple errors. The following are solutions for them.
When opening a database installed in Azure SQL, you may get this error:
The database on the server requires a Per Database license to be opened on SQL Azure
You must use PowerShell to import the Dynamics NAV license. To import a license file to an Azure SQL database I used the following command:
Import-NAVServerLicense DynamicsNAV100 -LicenseData ([Byte[]]$(Get-Content -Path "C:\Licenses\NAV_2017_DEV.flf" -Encoding Byte))
But I was getting the following error:
Reference to database and/or server name in 'master.dbo.$ndo$srvproperty' is not supported in this version of SQL Server
To resolve it, I needed to add an additional parameter to the command: -Database 2.
Import-NAVServerLicense DynamicsNAV100 -LicenseData ([Byte[]]$(Get-Content -Path "C:\Licenses\NAV_2017_DEV.flf" -Encoding Byte)) -Database 2

From Microsoft Dynamics NAV 2016 documentation:

-Database<LicenseDatabase>

Specifies the database into which to import the license file. The possible values are described in the following list:
Default = 0
Default; overrides the license file currently in use.
Master = 1
Forces the license file to be global.
NavDatabase = 2
Forces the license file to be local and stored in the Microsoft Dynamics NAV database that is used by the specified Microsoft Dynamics NAV Server instance.
Tenant = 3
Forces the license file to be local and stored in the Microsoft Dynamics NAV database that is used by the tenant that is specified in the Tenant parameter.

====================================================
Tips to run the NAV Admin Shell & Development Shell in PowerShell ISE

Run the Import-Module command to load the functions.

NAV 2016
--------
For Development Shell -->

Import-Module 'C:\Program Files (x86)\Microsoft Dynamics NAV\90\RoleTailored Client\NavModelTools.ps1'

For Administration Shell -->

Import-Module 'C:\Program Files\Microsoft Dynamics NAV\90\Service\NavAdminTool.ps1'


Sunday, November 5, 2017

Possible Solutions for a Large Dynamics NAV Database (Performance Issues)

Hello Friends,

Every once in a while, we will get a support call from a customer about archiving their historical data.
Dynamics NAV (formerly Navision) has been in the US since 1996, and customers that were first to adopt Dynamics NAV have kept that data with them through the years.
Some will argue that with the cost of storage in decline, it is not necessary to archive old data. While I tend to agree with that statement, trying to manage a 700 GB database backup is, if nothing else, very time-consuming.
There are a couple of ways to "shrink" the size of your database by eliminating data within Dynamics NAV.

Data Compression
The date compression process will consolidate multiple entries in the table in question into one entry.

The problem with the date compression process is that it takes a loooong time to run. It takes so long that we usually just stop the process. The Date Compression process seems designed for companies to run it periodically from when they start using NAV, not when your database is already 700 GB. That is never the case.


Most Dynamics NAV consultants out there will never recommend that their clients use Date Compression (me included). One of the main reasons is that, in prior versions of Navision, date compression caused data problems when you tried to upgrade.

Removing Data That Adds No Value


There is data in the database that one can consider low-priority. You're keeping the historical data, but it is more "nice to have" than required for regulatory compliance or critical to running the business. These tables include, but are not limited to:

  • Change Log
  • Sales Document Archive
  • Purchase Document Archive
  • E.D.I. Receive Documents
  • E.D.I. Send Document
  • Posted Sales/Purchase Documents
  • Posted Warehouse Documents
  • Warehouse Registered Documents
  • Any posted documents
This is not to say you should delete all of this data, but you can certainly delete the data not required by the tax or audit authorities.
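For example, trimming old Change Log entries could look like this. The table and field names follow the standard NAV naming pattern, but the company prefix, cutoff date, and batch size are placeholders of mine; take a backup first:

```sql
-- Delete Change Log entries older than the cutoff, in batches,
-- to keep each transaction (and the transaction log) small.
WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM [dbo].[CompanyName$Change Log Entry]
    WHERE [Date and Time] < '2015-01-01';

    IF @@ROWCOUNT = 0 BREAK;   -- stop once nothing is left to delete
END
```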

Re-Structure SQL Database

Create a blank new database with only the objects, and create the company with a PowerShell command. Then transfer all the data from the old live database to the new database using SQL scripts (we have done this successfully many times). It will reduce the database size by 50 to 70%. SQL full and sequential backups take less time, and you will notice a performance improvement in NAV. This process restructures the database on disk, and the page size becomes smaller.

Replace the HDD disk drives with SSD disk drives


Upgrade your server's disks to new SSD drives in a RAID 10 configuration. It will give you around 1,000 to 1,500 MB/s of disk throughput and will increase your big NAV database's performance by 20 to 50%.

Re-implementation

This is the nuclear option. Basically, start fresh in a new Dynamics NAV database with only your setup data, master data, and opening balances. This option is popular with companies that have been using Dynamics NAV for a long time. It gives an opportunity to eliminate a lot of bad data, in addition to modifications that are no longer needed.
The historical information is kept in the old database environment. If old data is needed, the user simply goes to the old NAV database to retrieve it.
But re-implementation is really overkill if the goal is specifically to address the database size issue.

Conclusion

With the performance and capacity of storage ever increasing and the cost of storage ever declining, these types of questions do not come up as often. I suspect that as time progresses, they will come up only once in a very long while.
I typically recommend that companies remove the data that adds little or no value before attempting anything drastic. Usually holding that data takes a ton of storage in the Dynamics NAV database.
There was a company I visited that had 10 GB worth of Change Log entries. Worse, the customer didn't even know what the change log was…

Thank you for reading.
Please give me your feedback on this.

Sunday, June 11, 2017

SQL Queries to Identify CPU Use of Executing Queries

Hello Friends,

I want to share some of my favorite SQL queries for identifying SQL Server performance issues.

-- Requests by CPU consumption.
SELECT  der.session_id ,
        DB_NAME(der.database_id) AS database_name ,
        deqp.query_plan ,
        SUBSTRING(dest.text, der.statement_start_offset / 2,
                  ( CASE WHEN der.statement_end_offset = -1
                         THEN DATALENGTH(dest.text)
                         ELSE der.statement_end_offset
                    END - der.statement_start_offset ) / 2)
                                        AS [statement executing] ,
        der.cpu_time
      --der.granted_query_memory
      --der.wait_time
      --der.total_elapsed_time
      --der.reads
FROM    sys.dm_exec_requests der
        INNER JOIN sys.dm_exec_sessions des
                       ON des.session_id = der.session_id
        CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) dest
        CROSS APPLY sys.dm_exec_query_plan(der.plan_handle) deqp
WHERE   des.is_user_process = 1
        AND der.session_id <> @@spid
ORDER BY der.cpu_time DESC ;

-----------------------------------------------------------------------------------------------------------------------

SELECT TOP 50
SUBSTRING(eqt.TEXT,(
       eqs.statement_start_offset/2) +1
,((CASE eqs.statement_end_offset
  WHEN -1 THEN DATALENGTH(eqt.TEXT)
  ELSE  eqs.statement_end_offset
  END - eqs.statement_start_offset)/2)+1)
,eqs.execution_count
,eqs.total_logical_reads
,eqs.last_logical_reads
,eqs.total_logical_writes
,eqs.last_logical_writes
,eqs.total_worker_time
,eqs.last_worker_time
,eqs.total_elapsed_time/1000000           AS Total_elapsed_time_Secs
,eqs.last_elapsed_time/1000000            AS Last_elapsed_time_Secs
,eqs.last_execution_time
,eqp.query_plan
FROM        sys.dm_exec_query_stats eqs
CROSS APPLY sys.dm_exec_sql_text(eqs.sql_handle) eqt
CROSS APPLY sys.dm_exec_query_plan(eqs.plan_handle) eqp
ORDER BY eqs.total_worker_time DESC

--------------------------------------------------------------------------------------------------------------------------

WITH [Waits] AS (SELECT [wait_type],[wait_time_ms] / 1000.0 AS [WaitS], ([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
[signal_wait_time_ms] / 1000.0 AS [SignalS],[waiting_tasks_count] AS [WaitCount],100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum] FROM sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (N'CLR_SEMAPHORE',N'LAZYWRITER_SLEEP',N'RESOURCE_QUEUE',N'SQLTRACE_BUFFER_FLUSH',N'SLEEP_TASK',N'SLEEP_SYSTEMTASK',
N'WAITFOR',N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',N'XE_TIMER_EVENT',N'XE_DISPATCHER_JOIN',
N'LOGMGR_QUEUE',N'FT_IFTS_SCHEDULER_IDLE_WAIT','BROKER_TASK_STOP', N'CLR_MANUAL_EVENT',N'CLR_AUTO_EVENT',N'DISPATCHER_QUEUE_SEMAPHORE',
N'TRACEWRITE',N'XE_DISPATCHER_WAIT',N'BROKER_TO_FLUSH',N'BROKER_EVENTHANDLER',N'FT_IFTSHC_MUTEX',  N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
N'DIRTY_PAGE_POLL',  N'SP_SERVER_DIAGNOSTICS_SLEEP'))
SELECT [W1].[wait_type] AS [Wait_Type],[W1].[WaitCount] AS [Wait_Count],CAST ([W1].[Percentage] AS DECIMAL(4, 2)) AS [Wait_Time_%],
CAST ([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_Time_Sec],CAST ([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_wait_Sec],
CAST ([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_wait_Sec],GETDATE() AS [Current_Date]
FROM [Waits] AS [W1] INNER JOIN [Waits] AS [W2] ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum], [W1].[wait_type], [W1].[WaitS],[W1].[ResourceS], [W1].[SignalS], [W1].[WaitCount], [W1].[Percentage]
HAVING SUM ([W2].[Percentage]) - [W1].[Percentage] < 95
order by Wait_Count desc

--------------------------------------------------------------------------------------------------------------------------
--Transaction log impact of active transactions.
SELECT DTST.[session_id],
 DES.[login_name] AS [Login Name],
 DB_NAME (DTDT.database_id) AS [Database],
 DTDT.[database_transaction_begin_time] AS [Begin Time],
 -- DATEDIFF(ms,DTDT.[database_transaction_begin_time], GETDATE()) AS [Duration ms],
 CASE DTAT.transaction_type
   WHEN 1 THEN 'Read/write'
    WHEN 2 THEN 'Read-only'
    WHEN 3 THEN 'System'
    WHEN 4 THEN 'Distributed'
  END AS [Transaction Type],
  CASE DTAT.transaction_state
    WHEN 0 THEN 'Not fully initialized'
    WHEN 1 THEN 'Initialized, not started'
    WHEN 2 THEN 'Active'
    WHEN 3 THEN 'Ended'
    WHEN 4 THEN 'Commit initiated'
    WHEN 5 THEN 'Prepared, awaiting resolution'
    WHEN 6 THEN 'Committed'
    WHEN 7 THEN 'Rolling back'
    WHEN 8 THEN 'Rolled back'
  END AS [Transaction State],
 DTDT.[database_transaction_log_record_count] AS [Log Records],
 DTDT.[database_transaction_log_bytes_used] AS [Log Bytes Used],
 DTDT.[database_transaction_log_bytes_reserved] AS [Log Bytes RSVPd],
 DEST.[text] AS [Last Transaction Text],
 DEQP.[query_plan] AS [Last Query Plan]
FROM sys.dm_tran_database_transactions DTDT
 INNER JOIN sys.dm_tran_session_transactions DTST
   ON DTST.[transaction_id] = DTDT.[transaction_id]
 INNER JOIN sys.[dm_tran_active_transactions] DTAT
   ON DTST.[transaction_id] = DTAT.[transaction_id]
 INNER JOIN sys.[dm_exec_sessions] DES
   ON DES.[session_id] = DTST.[session_id]
 INNER JOIN sys.dm_exec_connections DEC
   ON DEC.[session_id] = DTST.[session_id]
 LEFT JOIN sys.dm_exec_requests DER
   ON DER.[session_id] = DTST.[session_id]
 CROSS APPLY sys.dm_exec_sql_text (DEC.[most_recent_sql_handle]) AS DEST
 OUTER APPLY sys.dm_exec_query_plan (DER.[plan_handle]) AS DEQP
ORDER BY DTDT.[database_transaction_log_bytes_used] DESC;
-- ORDER BY [Duration ms] DESC;
-------------------------------------------------------------------------------------------------------------------------

--Tempdb usage
WITH    version_store ( [rowset_id], [bytes consumed] )
          AS ( SELECT TOP 1
                        [rowset_id] ,
                        SUM([record_length_first_part_in_bytes]
                            + [record_length_second_part_in_bytes])
                                                          AS [bytes consumed]
               FROM     sys.dm_tran_version_store
               GROUP BY [rowset_id]
               ORDER BY SUM([record_length_first_part_in_bytes]
                            + [record_length_second_part_in_bytes])
             )
    SELECT  VS.[rowset_id] ,
            VS.[bytes consumed] ,
            DB_NAME(DTVS.[database_id]) AS [database name] ,
            DTASDT.[session_id] AS session_id ,
            DES.[login_name] AS [session login] ,
            DEST.text AS [session command]
    FROM    version_store VS
            INNER JOIN sys.[dm_tran_version_store] DTVS
                         ON VS.rowset_id = DTVS.[rowset_id]
            INNER JOIN sys.[dm_tran_active_snapshot_database_transactions]
                                                                      DTASDT
                         ON DTVS.[transaction_sequence_num] =
                                           DTASDT.[transaction_sequence_num]
            INNER JOIN sys.dm_exec_connections DEC
                         ON DTASDT.[session_id] = DEC.[most_recent_session_id]
            INNER JOIN sys.[dm_exec_sessions] DES
                         ON DEC.[most_recent_session_id] = DES.[session_id]
            CROSS APPLY sys.[dm_exec_sql_text](DEC.[most_recent_sql_handle])

                                                                       DEST ;

------------------------------------------------------------------

Copy Data from One Table to Another Table on same SQL Server Instance

INSERT INTO [NAVDB_2013R2].[dbo].[ComapnyName_ India$Purch_ Comment Line]
           ([Document Type]
           ,[No_]
           ,[Document Line No_]
           ,[Line No_]
           ,[Date]
           ,[Code]
           ,[Comment])
(SELECT 
            [Document Type]
           ,[No_]
           ,[Document Line No_]
           ,[Line No_]
           ,[Date]
           ,[Code]
           ,[Comment]

FROM [NAVDB_2013].[dbo].[ComapnyName_$Purch_ Comment Line])
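
After copying, it is worth verifying that the row counts match (same placeholder names as the example above):

```sql
-- Quick sanity check: both tables should report the same count.
SELECT (SELECT COUNT(*) FROM [NAVDB_2013].[dbo].[ComapnyName_$Purch_ Comment Line])         AS source_rows,
       (SELECT COUNT(*) FROM [NAVDB_2013R2].[dbo].[ComapnyName_ India$Purch_ Comment Line]) AS target_rows;
```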