Distributing a single table across multiple disks of different performance?



























I have a table that increases by several million rows per day. It holds over a year's worth of data.



The table is regularly SELECTed from, but the older the data is the less frequently it is accessed. The current data (i.e. data added in the last hour) is very frequently queried, but as the records get older they are less likely to be queried.



The database is currently on a traditional (magnetic) hard disk. I also have a much smaller solid state disk that is very fast, but not large enough to hold more than a few days' worth of data.



I’d like to somehow spread the table across the two disks. I’d like new data to be INSERTed into the small, high performance disk, and then invisibly migrated to the slower but larger disk after a few days (when it’s less likely to be needed).



I know I could split the data into two tables (let's call them TABLE_SLOW and TABLE_FAST). I could insert into TABLE_FAST, run an archiving process every night to transfer the oldest records into TABLE_SLOW, and then create a view based on a union of TABLE_SLOW and TABLE_FAST to use for queries – but I'm looking for a solution that wouldn't require any changes to the application, only the database.
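Roughly, that workaround would look something like this (all names are just illustrative, and this is exactly the approach I'd like to avoid):

-- the union view the application would have to query instead of the original table
CREATE VIEW the_table AS
    SELECT * FROM TABLE_FAST
    UNION ALL          -- UNION ALL avoids the duplicate-elimination cost of plain UNION
    SELECT * FROM TABLE_SLOW;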



The data includes a field called ‘loadedTime’ so there’s no problem working out how old the data is.










      postgresql performance-tuning






asked Apr 15 '18 at 9:25 by Hemel
edited Apr 15 '18 at 12:27 by Michael Green





1 Answer
































          How to put data on a different disk: (using tablespaces)



          CREATE TABLESPACE fastspace LOCATION '/srv/fast_disk/postgresql/data';


          https://www.postgresql.org/docs/10/static/manage-ag-tablespaces.html
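Everything else stays in the default tablespace on the existing magnetic disk. If useful, a quick check of where a table ended up (just a sketch, assuming the table names used below):

-- tablespace is NULL for tables in the default tablespace
select tablename, tablespace
from pg_tables
where tablename in ('table_slow', 'table_fast');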






          How to make two tables behave like one: (SQL inheritance)



CREATE TABLE TABLE_FAST () INHERITS (TABLE_SLOW) TABLESPACE fastspace;


You will also need to add indexes (and any unique or primary key constraints) to this new table as appropriate, since those are not inherited.
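For example, a minimal sketch assuming the loadedTime column from the question (the index name is made up; the TABLESPACE clause optionally keeps the index on the fast disk too):

-- indexes are not inherited, so the child table needs its own
CREATE INDEX table_fast_loadedtime_idx
    ON TABLE_FAST (loadedTime)
    TABLESPACE fastspace;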



          https://www.postgresql.org/docs/10/static/tutorial-inheritance.html



Now SELECTs from TABLE_SLOW will also see the data in TABLE_FAST.
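For illustration, once rows exist in both tables:

select count(*) from TABLE_SLOW;        -- parent: includes the rows still sitting in TABLE_FAST
select count(*) from only TABLE_SLOW;   -- just the rows that have already been moved to the slow disk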



You may also benefit from implementing constraint exclusion (also described on that page), but the index on loadedTime (which I am assuming you have) should give a good speed increase by itself.






How to move the data: (a CTE with DELETE ... RETURNING)



e.g. moving rows more than 3 days old:



with deleted as (
  delete from only TABLE_FAST
  where loadedTime < now() - '3 days'::interval
  returning * )
insert into TABLE_SLOW select * from deleted;





answered Apr 16 '18 at 4:54 by Jasen (edited Apr 16 '18 at 5:14)








Thanks Jasen, but I'm not sure that's 100% what I want. Unless I've misunderstood your suggestions, I would need to manually run a delete/insert process to move data from one table to another, and I would also end up with two tables (with different names). I was looking for a method where I could stick with a single named table and no manual processes.
  – Hemel, Apr 16 '18 at 7:42










