Partial copy of huge database

I have started working on a project which has a big database, around 300 GB. For security reasons I cannot access the database from my local web app, so I need to copy the last 100,000 rows from each table.

To copy from one table to another, I know I can do:

    INSERT INTO table2
    SELECT * FROM table1
    WHERE condition;

But how can I handle connecting to the other database?

One idea I have is to create tables with the same structure, use the query above to move the records, and then dump those tables.
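For concreteness, that staging-table idea might look roughly like this. This is only a sketch: the staging table name and the assumption that the table has an auto-increment primary key named id are illustrative, not part of the original setup.

    -- Staging copy of one table, holding only the newest 100,000 rows.
    -- Assumes table1 has an auto-increment primary key named id (hypothetical).
    CREATE TABLE table1_staging LIKE table1;

    INSERT INTO table1_staging
    SELECT * FROM table1
    ORDER BY id DESC
    LIMIT 100000;

The *_staging tables could then be dumped with mysqldump and loaded on the other server, which is essentially what the answers below describe.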



Is there a better way?










mysql migration export






asked Feb 5 '18 at 12:55 by Eduardo









2 Answers


















Copy the create script for those tables and copy only the rows that you want. Note that mysqldump has no --limit option; a common workaround is to pass the limit through --where:

    mysqldump --opt --user=username --password=password --where="1 LIMIT 1000" database table > file.sql

At the new server, create the table and load the dump:

    mysql --user=username --password=password database < file.sql

I think it's the best way for your case.

answered Feb 5 '18 at 13:49 by Krismorte
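Since the goal is the last 100,000 rows of every table, the single-table command can be wrapped in a small shell loop. The sketch below is an illustration only: source_host, the credentials, src_db, and the per-table id column are placeholders, and it relies on mysqldump appending the --where text verbatim to its SELECT (if your version rejects the LIMIT trick, compute an id cutoff instead, as in the other answer).

    # Dump the newest 100,000 rows of every table in src_db into one file.
    # Placeholders: source_host, username, password, src_db; assumes each table
    # has an auto-increment primary key named id.
    rm -f partial_dump.sql
    for TBL in $(mysql -h source_host -u username -p'password' -N -e 'SHOW TABLES' src_db); do
        mysqldump --opt -h source_host -u username -p'password' \
            --where="1 ORDER BY id DESC LIMIT 100000" \
            src_db "$TBL" >> partial_dump.sql
    done

The resulting partial_dump.sql can then be loaded on the local server with mysql local_db < partial_dump.sql.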































    mysqldump -h source_server... --order-by-primary --where='id > ...' src_db tbl |
    mysql -h dest_server ...

But that assumes:

• id is the PRIMARY KEY
• You can get the id of the row 100K (or so) rows from the end: SELECT id FROM tbl ORDER BY id DESC LIMIT 100000, 1
• You can access both servers from wherever you run the pipeline.

Since you need 2 connections, there is no 'simple' way to do it from the mysql command-line tool or from client code (PHP, Java, etc). Copying one row at a time would be quite slow. Employing LOAD DATA will be no better than mysqldump + mysql.

answered Feb 12 '18 at 21:30 by Rick James
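A sketch of that pipeline for a single table, with placeholder names (source_server, dest_server, user, pass, src_db, dest_db, tbl, id) standing in for the real ones:

    # Find the id just below the newest 100,000 rows (0 if the table is smaller).
    CUTOFF=$(mysql -h source_server -u user -p'pass' -N -e \
        "SELECT COALESCE((SELECT id FROM tbl ORDER BY id DESC LIMIT 100000,1), 0)" src_db)

    # Dump only the rows above the cutoff and load them straight into the destination.
    mysqldump -h source_server -u user -p'pass' --opt --order-by-primary \
        --where="id > $CUTOFF" src_db tbl |
    mysql -h dest_server -u user -p'pass' dest_db

Wrapping this in a loop over SHOW TABLES covers the remaining tables, provided each one has a suitable integer primary key.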






















