SQL Server Replication

We had an issue with replication where a large (7.5 million row) update was done on a 33 million row table. Replication converted that into 7.5 million individual UPDATE statements. When my alerting notified me that our publication was behind our latency threshold, I started to examine the issue.



I discovered that the update had been executed, and that it would have taken the Distribution Agent a few days to chew through those update statements. So I decided to see if we could skip over the records it was trying to process. Using the system tables and stored procedures, I was able to determine the period of time during which the update was executed. Once I found the last xact_seqno, I stopped the Distribution Agent and applied the update manually to the subscriber database. I then executed sp_setsubscriptionxactseqno to skip past all 7.5 million transactions. When I restarted the Distribution Agent, it appeared to have worked and processed the remaining transactions.
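
For context, here is a minimal sketch of the kind of queries involved. The distribution tables and the procedure are standard, but the time window, the server/database/publication names, and the sequence number below are placeholders, not the actual values from this incident:

-- Run in the distribution database: find the last xact_seqno delivered
-- during the window in which the big update was logged.
SELECT TOP (1) xact_seqno, entry_time
FROM dbo.MSrepl_transactions
WHERE entry_time >= '2016-06-07 20:00'   -- placeholder window
  AND entry_time <  '2016-06-08 02:00'
ORDER BY xact_seqno DESC

-- With the Distribution Agent stopped and the update applied manually,
-- run this at the Subscriber on the subscription database to move the
-- subscription past those commands.
EXEC sp_setsubscriptionxactseqno
    @publisher = 'PUBSRV',            -- placeholder publisher server
    @publisher_db = 'PubDB',          -- placeholder publisher database
    @publication = 'MyPublication',   -- placeholder publication name
    @xact_seqno = 0x0004F1B500001F4D0004  -- the last seqno found above
GO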



For good measure I used Redgate's SQL Data Compare to check whether the data was out of sync, and I was missing about 24 records (which may not have been there originally, since I wasn't the one who set up the replication).



My question is: was that the right way to fix it? How can you be sure you have identified the correct next xact_seqno? Do you order the transactions by xact_seqno or by entry_time?


sql-server sql-server-2012 replication transactional-replication

asked Jun 8 '16 at 12:08 by Derek Czarny
edited Jun 8 '16 at 13:40 by SQLPRODDBA

  • Can you please provide details about SQL Server version and type of replication?

    – SQLPRODDBA
    Jun 8 '16 at 12:35

  • The issue was from the distributor to subscriber, which was on the same server (SQL Server 2012). Transactional replication.

    – Derek Czarny
    Jun 8 '16 at 12:40




1 Answer






An alternative would be to remove the article from the publication and re-add it, causing a new snapshot of the table to be taken. That way there is no question whether the right data is in the subscriber.
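
As an illustration, a minimal sketch of the drop/re-add, using the placeholder publication name from the code further down and a hypothetical article name:

-- Depending on the subscription setup you may first need to drop the
-- subscription to this article with sp_dropsubscription.
EXEC sp_droparticle
    @publication = 'REP_P',    -- placeholder publication name
    @article = 'BigTable',     -- hypothetical article name
    @force_invalidate_snapshot = 1
GO
EXEC sp_addarticle
    @publication = 'REP_P',
    @article = 'BigTable',
    @source_object = 'BigTable'
GO
-- Then run the Snapshot Agent so a fresh snapshot of the re-added
-- article is generated and delivered.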



To do this you need to make sure the publication properties allow_anonymous and immediate_sync are set to FALSE - if either of them is true, then a snapshot of the entire publication will be taken when you remove and re-add the table, rather than just that table.



To check, run sp_helppublication against the publication database.
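
For example (the publication name is a placeholder):

EXEC sp_helppublication @publication = 'REP_P'
GO
-- Inspect the allow_anonymous and immediate_sync columns in the
-- result set: 1 means true, 0 means false.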



To change them if either is set to 1 (true):



EXEC sp_changepublication
    @publication = 'REP_P',        -- enter your publication name
    @property = 'allow_anonymous',
    @value = 'false'
GO
EXEC sp_changepublication
    @publication = 'REP_P',        -- enter your publication name
    @property = 'immediate_sync',
    @value = 'false'
GO


I have tested this and it works in my environment; however, I strongly suggest you try it in a lab first.

answered Jun 8 '16 at 13:27 by Hpk
edited Jun 8 '16 at 13:57 by SQLPRODDBA

  • Yes, but what if the update was already executed and you are just troubleshooting after the fact? Would you still remove the table from replication and add it back?

    – Derek Czarny
    Jun 8 '16 at 13:37

  • @Hpk IMO it may work when you have a small table, but you still have to test it. What if we have a very large table? A re-snapshot would be a pain in that case.

    – SQLPRODDBA
    Jun 8 '16 at 13:40

  • @DerekCzarny - It's an alternative. In my environment I would have to prove to my manager that I had tested my method and that the data is intact, and it would also need to go through CAB etc. due to the non-standard process. I guess it's up to your environment. My method is a sure-fire way, no questions asked, to make the publisher match the subscriber. In terms of table size, 33 million records is not large unless the table is very wide or contains large data types.

    – Hpk
    Jun 8 '16 at 14:45

  • If you used distribution.dbo.MSrepl_transactions and distribution.dbo.MSrepl_commands to get the update commands, restricted down to the applicable article(s), then your method looks OK. xact_seqno is sequential and is the right thing to order by.

    – Hpk
    Jun 8 '16 at 14:59
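
Expanding on that last comment, a minimal sketch of pulling the pending commands for one article; the article_id is a placeholder you would look up in dbo.MSarticles:

-- Run in the distribution database. xact_seqno, not entry_time,
-- defines the order in which the Distribution Agent applies commands.
SELECT t.xact_seqno, t.entry_time, c.article_id, c.command_id
FROM dbo.MSrepl_transactions AS t
JOIN dbo.MSrepl_commands AS c
    ON  c.publisher_database_id = t.publisher_database_id
    AND c.xact_seqno = t.xact_seqno
WHERE c.article_id = 42   -- placeholder article id
ORDER BY t.xact_seqno, c.command_id

-- sp_browsereplcmds can decode the binary command column if you need
-- to inspect the actual statements.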