Commonly Used CDH Tuning Parameters

–Planning: reduce YARN memory
–YARN takes 146GB; two Solr instances at 16GB each; HBase 40GB

–5 containers per node

–yarn (before)
yarn.nodemanager.resource.memory-mb=146GB
yarn.nodemanager.resource.cpu-vcores=32
yarn.scheduler.maximum-allocation-mb=20GB

–yarn (after reducing YARN memory)
yarn.nodemanager.resource.memory-mb=108GB
yarn.nodemanager.resource.cpu-vcores=24
yarn.scheduler.maximum-allocation-mb=20GB
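
A quick sanity check of the reduced sizing: the per-node memory divided by the maximum container allocation should match the planned 5 containers per node. A minimal sketch using the numbers from this note (not read from a live cluster):

```shell
# Containers per node = node memory / max container allocation
node_mem_gb=108
max_alloc_gb=20
containers=$((node_mem_gb / max_alloc_gb))
echo "$containers"   # 5 containers per node, matching the plan above
```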

–mapreduce
yarn.app.mapreduce.am.resource.mb=10GB
mapreduce.map.memory.mb=10GB
mapreduce.reduce.memory.mb=10GB
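
The container sizes above only bound the whole task process; the JVM heap is set separately via `mapreduce.map.java.opts` / `mapreduce.reduce.java.opts`. A hedged sketch, assuming the common practice of giving the heap roughly 80% of the container to leave headroom for off-heap memory (the 80% ratio is a convention, not a value from this note):

```shell
# Derive -Xmx from the 10 GB container size at an assumed 80% ratio
container_mb=10240
heap_mb=$((container_mb * 80 / 100))
echo "mapreduce.map.java.opts=-Xmx${heap_mb}m"   # -Xmx8192m
```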

–hive (Hive on Spark)
spark.executor.cores=4
spark.executor.memory=16GB
spark.executor.memoryOverhead=2GB
spark.driver.memory=10.5GB
spark.yarn.driver.memoryOverhead=1.5GB
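
These executor settings must fit inside the YARN limits above: executor memory plus overhead cannot exceed yarn.scheduler.maximum-allocation-mb, or YARN will reject the container request. A small check with the values from this note:

```shell
# One executor needs memory + overhead in a single YARN container
executor_gb=16
overhead_gb=2
max_alloc_gb=20
total_gb=$((executor_gb + overhead_gb))
if [ "$total_gb" -le "$max_alloc_gb" ]; then
  echo "OK: ${total_gb}GB per executor fits in a ${max_alloc_gb}GB container"
else
  echo "FAIL: ${total_gb}GB exceeds ${max_alloc_gb}GB"
fi
```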

–hive metastore and hiveserver2
Hive Metastore Server heap = 16GB
HiveServer2 heap = 16GB

–zookeeper
maxClientCnxns=300
Server Java heap size = 8GB

–spark and spark2
spark.authenticate=true

–hdfs
dfs.namenode.handler.count=10 –equal to the number of data disks
dfs.datanode.sync.behind.writes=true
dfs.datanode.max.transfer.threads=8192
namenode heap size=16GB

–hbase
hbase.hstore.compactionThreshold=5
HBase RegionServer Java heap = 32GB
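
The compaction threshold above is set through Cloudera Manager; on a plain Apache HBase deployment the equivalent lives in hbase-site.xml. A minimal fragment (minor compaction is considered once a store accumulates this many HFiles; the Apache default is 3, so 5 trades some read amplification for less compaction I/O):

```xml
<!-- Consider minor compaction once a store has at least 5 HFiles -->
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>5</value>
</property>
```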
