Excessive RAM consumption in MySQL Cluster
At the moment I have a cluster configured with 5 nodes:
2 data nodes
2 SQL nodes
1 management node
My config.ini file is the following:
[ndbd default]
NoOfReplicas = 2 # Number of replicas
DataMemory = 256M
IndexMemory = 100M
ServerPort = 2202
MaxNoOfConcurrentTransactions = 429496
MaxNoOfConcurrentOperations = 472496
MaxNoOfLocalOperations = 519745
[ndb_mgmd]
NodeId = 1
HostName = 192.168.10.145 # Hostname or IP address of MGM node
DataDir = /var/lib/mysql-cluster # Directory for MGM node log files
[ndbd]
HostName = 192.168.10.181 # Hostname or IP address
NodeId = 2 # Node ID for this data node
DataDir = /mnt/dataPartition/mysql/data # Directory for this data node's data files
[ndbd]
HostName = 192.168.10.183 # Hostname or IP address
NodeId = 3 # Node ID for this data node
DataDir = /mnt/dataPartition/mysql/data # Directory for this data node's data files
[mysqld]
HostName = 192.168.10.140 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[mysqld]
HostName = 192.168.10.184 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[mysqld]
The ndb_mgm REPORT MEMORYUSAGE output is the following:
ndb_mgm> 2 REPORT MEMORYUSAGE
Node 2: Data usage is 18% (1476 32K pages of total 8192)
Node 2: Index usage is 11% (1461 8K pages of total 12832)
ndb_mgm> 3 REPORT MEMORYUSAGE
Node 3: Data usage is 18% (1480 32K pages of total 8192)
Node 3: Index usage is 11% (1462 8K pages of total 12832)
Each data node has 2 GB of RAM. With this configuration the ndbd process should not consume more than about 500 MB of RAM, yet it is consuming 1.89 GB of RAM per node, and I am already using disk-based storage for the columns that are not indexed.
What is wrong, or what am I missing?
mysql mysql-cluster ndbcluster
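As a quick sanity check on the numbers above, the REPORT MEMORYUSAGE totals correspond exactly to the configured DataMemory and IndexMemory pools, so the report only describes those two pools, not the total RAM taken by the ndbd process:

8192 pages  x 32 KB = 262144 KB = 256 MB   (DataMemory)
12832 pages x  8 KB = 102656 KB ≈ 100 MB   (IndexMemory)

The rest of the process's RAM goes to the other record pools and buffers that ndbd allocates at startup.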
asked Nov 17 '17 at 0:50 by Jonathan Diaz
How did you compute the "500"? – Rick James, Nov 18 '17 at 0:39
1 Answer
The 429496 transaction records consume around 400 MB (around 900 bytes per record).
The operation records also consume around 400 MB: around 300 bytes per record in the transaction coordinator (TC) and around 500 bytes per record in the LDM thread.
In addition, ndbd uses memory for a number of other buffers, such as REDO buffers, send buffers and job buffers. These normally consume a few hundred MB.
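Putting those per-record sizes together with the configured pools gives a rough budget that already accounts for most of the ~1.9 GB observed (the figures below are approximations derived from the sizes quoted above):

429496 transaction records x ~900 B        ≈ 390 MB
472496 operation records   x ~300 B (TC)   ≈ 140 MB
519745 local operations    x ~500 B (LDM)  ≈ 260 MB
DataMemory                                   256 MB
IndexMemory                                  100 MB
REDO / send / job buffers                  ~200-400 MB
Total                                      ~1.3-1.5 GB, before other fixed per-node overhead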
In addition, you can decrease memory usage by setting BatchSizePerLocalScan to a smaller value (e.g. 16). This should save a few hundred MB of memory.
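Since the oversized record pools account for roughly 800 MB of the total, scaling them back is where most of the savings come from. A sketch of a trimmed [ndbd default] section, using assumed values for a modest workload rather than recommendations (the documented defaults are 4096 concurrent transactions and 32768 concurrent operations), might look like this:

[ndbd default]
NoOfReplicas = 2
DataMemory = 256M
IndexMemory = 100M
ServerPort = 2202
# Assumed pool sizes for a modest workload; raise them only if the
# workload actually needs more simultaneous transactions/operations
MaxNoOfConcurrentTransactions = 16384
MaxNoOfConcurrentOperations = 65536
# Roughly 1.1 x MaxNoOfConcurrentOperations, per the manual's guideline
MaxNoOfLocalOperations = 72100
# Smaller scan batches, as suggested above
BatchSizePerLocalScan = 16

Changes to config.ini take effect after restarting the management node and then performing a rolling restart of the data nodes.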
answered Nov 29 '17 at 11:54 by Mikael Ronström