Postgres row_to_json precision for numeric type
I am using the row_to_json function to convert a table row to a JSON object.
A few columns in the table are of type numeric and contain float values.



When row_to_json returns the JSON, the values in these numeric columns come back distorted: 9.98 becomes 9.98000000000000043 and 6.30 becomes 6.29999999999999982.
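
For illustration, the extra digits look like binary float artifacts. A double precision value serialized while extra_float_digits is raised (the JDBC driver does this by default) produces exactly this shape; whether that is what happens in my trigger path is an assumption, not something I have confirmed:

    set extra_float_digits = 3;  -- the JDBC driver sets this by default
    select to_json(9.98::double precision);
    -- 9.98000000000000043  (on PostgreSQL 11 and earlier)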



Is there a way to impose precision in the JSON creation function itself? There are no other data issues in the table affecting other modules.
Changing the column type to numeric(12,2) etc. is not possible, since the table has over 2 billion rows (the rewrite would lock the table for 30+ hours) and would affect other production modules.



--update--



I should have made the question a bit clearer about the use case.



row_to_json is used in a trigger to audit updates in the DB. The JSON result is stored in another table, and the same audit function is shared by 100+ tables, some of which may have this same issue.
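
Roughly, the setup looks like this (a minimal sketch; audit_log, audit_row and some_table are placeholder names, not the real schema):

    create table audit_log (
        table_name text,
        changed_at timestamptz default now(),
        row_data   json
    );

    create or replace function audit_row() returns trigger as $$
    begin
        -- store the whole updated row as json, keyed by the table name
        insert into audit_log (table_name, row_data)
        values (tg_table_name, row_to_json(new));
        return new;
    end;
    $$ language plpgsql;

    -- one trigger per audited table (100+ of these):
    create trigger trg_audit
    before update on some_table
    for each row execute procedure audit_row();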



One solution would be to handle it at the application level when retrieving the JSONs from the audit table, but I would like to see if there is a function that can handle it in the DB itself.
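
For example, something along these lines is what I am after, if it can be written safely (a sketch assuming jsonb and PostgreSQL 9.5+; round_json_numbers is a hypothetical name, and it only touches top-level numbers):

    -- sketch: round every top-level number in a jsonb object
    create or replace function round_json_numbers(j jsonb, places int)
    returns jsonb as $$
        select coalesce(jsonb_object_agg(
                   key,
                   case jsonb_typeof(value)
                       when 'number'
                           then to_jsonb(round(value::text::numeric, places))
                       else value
                   end), '{}'::jsonb)
        from jsonb_each(j);
    $$ language sql;

    -- select round_json_numbers('{"a": 9.98000000000000043}'::jsonb, 2);
    -- returns {"a": 9.98}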










postgresql datatypes json

asked Jan 24 '18 at 12:50 by Min2j (edited Jan 24 '18 at 21:22)

  • Using a normal "select row_to_json(table) from table" gives the right value without any loss of precision. But in the trigger (a BEFORE UPDATE trigger), using "row_to_json(NEW)" leads to a loss of precision.

    – Min2j
    Feb 3 '18 at 17:30
1 Answer
You can round the values:



select row_to_json(t)
from (
    select id,
           round(value_1, 2) as value_1,
           round(value_2, 2) as value_2
    from test
) t;


If you need that more often and don't want to type it all the time, create a view that returns the rounded values.
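
For instance (a sketch; test, value_1 and value_2 are the placeholder names from above):

    create view test_rounded as
    select id,
           round(value_1, 2) as value_1,
           round(value_2, 2) as value_2
    from test;

    select row_to_json(t) from test_rounded t;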

answered Jan 24 '18 at 12:53 by a_horse_with_no_name

  • Thank you for the response @a_horse_with_no_name. This would work for normal retrieval queries that require JSONs. But for triggers on many different tables, each with its own column names, this approach would not work. I have updated the question to make it clearer.

    – Min2j
    Jan 29 '18 at 18:41