The table below lists the various Hive settings that can be used throughout this journey into Hive optimization.

 

 
Each entry below lists the application, the key(s) to set, the Hive version where known, details, and a reference.
Skew Merge Bucket Joins

set hive.optimize.skewjoin=true;
set hive.skewjoin.key=skew_key_threshold;

Details: Use when a join is bottlenecked on the reducer that receives the skewed key (see the sketch below).

Reference: Hadoop Summit Preso
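
A sketch of the settings above applied around a skewed join; the table names are hypothetical, and the threshold (the number of rows a key must exceed before it is treated as skewed) is illustrative.

Code Block
-- Keys seen more than 100000 times are treated as skewed and handled
-- in a follow-up map join instead of overloading a single reducer.
set hive.optimize.skewjoin=true;
set hive.skewjoin.key=100000;

select a.id, a.val, b.val
from big_table a
join other_table b on (a.id = b.id);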

Sort Merge Bucket Map Join

set hive.optimize.bucketmapjoin=true;
set hive.optimize.bucketmapjoin.sortedmerge=true;
set hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

Why: There is no limit on file/partition/table size.

1. Works together with bucket map join.
2. Bucket columns == join columns == sort columns (see the sketch below).

Reference: Hadoop Summit Preso
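
A sketch of tables that meet the requirements above (bucket columns == join columns == sort columns, matching bucket counts); the names and bucket counts are hypothetical.

Code Block
-- Both tables are bucketed AND sorted on the join column, into the same
-- number of buckets. Populate them with hive.enforce.bucketing=true and
-- hive.enforce.sorting=true so the physical layout matches the metadata.
create table orders_smb (order_id bigint, amount double)
clustered by (order_id) sorted by (order_id) into 32 buckets;

create table payments_smb (order_id bigint, paid double)
clustered by (order_id) sorted by (order_id) into 32 buckets;

set hive.optimize.bucketmapjoin=true;
set hive.optimize.bucketmapjoin.sortedmerge=true;
set hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

select o.order_id, o.amount, p.paid
from orders_smb o
join payments_smb p on (o.order_id = p.order_id);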

Bucket Map Join

set hive.optimize.bucketmapjoin=true;

Why: Total table/partition size is big, so a plain map join is not a good fit.

1. Works together with map join.
2. All join tables are bucketized, and each small table's bucket count must divide evenly into the big table's bucket count (see the sketch below).
3. Bucket columns == join columns.
 
Prevent MapJoins for Large Tables

set hive.auto.convert.join=false;

Reference: Hive Join Optimization Doc

Controlling the CombineHiveInputFormat Size

Hadoop v1.x:
set mapred.max.split.size=268435456;
set mapred.min.split.size=

Hadoop v2.x:
set mapreduce.input.fileinputformat.split.maxsize=
set mapreduce.input.fileinputformat.split.minsize=

set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;

Version: 0.10
Details: Use with the split size settings above to help build larger processing blocks (see the sketch below).
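
A sketch combining the Hadoop v2.x split settings with CombineHiveInputFormat; the 256 MB maximum matches the v1.x example above, and the 128 MB minimum is an illustrative assumption.

Code Block
-- Build larger input splits so each mapper processes a bigger block of data.
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
set mapreduce.input.fileinputformat.split.maxsize=268435456;  -- 256 MB
set mapreduce.input.fileinputformat.split.minsize=134217728;  -- 128 MB (illustrative)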
Dynamic Partition Creation

set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions=10000;
set hive.exec.max.dynamic.partitions.pernode=500;
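
A sketch of a dynamic-partition load using the settings above; the tables are hypothetical. Depending on your Hive version, hive.exec.dynamic.partition=true may also need to be set.

Code Block
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- The dynamic partition column (sale_date) must come last in the select list;
-- Hive creates one partition per distinct sale_date value it encounters.
insert overwrite table sales_partitioned partition (sale_date)
select item_id, amount, sale_date
from sales_staging;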
   
 
set hive.auto.convert.join.noconditionaltask=true;
set hive.auto.convert.join.noconditionaltask.size=10000;

Reference: Hive Join Optimization Doc

Control the Output Compression

set hive.exec.compress.output=true;

Control the Intermediate Compression

set hive.exec.compress.intermediate=true;
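
A sketch pairing the flags above with an explicit codec; the codec properties are standard Hadoop settings, and Snappy is assumed to be installed on the cluster.

Code Block
-- Compress the final job output and the intermediate data between stages.
set hive.exec.compress.output=true;
set hive.exec.compress.intermediate=true;

-- Choose the codec via the underlying Hadoop properties (Hadoop v2.x names).
set mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
set mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;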
   
 
set io.sort.mb=400;

Details: The total amount of buffer memory to use while sorting files, in megabytes. By default, this gives each merge stream 1 MB, which should minimize seeks.
 
set hive.limit.pushdown.memory.usage=0.1f;

Details: Used in ORDER BY ... LIMIT queries to push the LIMIT clause down. For example,

select * from table order by key1 limit 10;

would use 10% of memory to store a Top-K list: the implementation keeps an in-memory ordered list of 10 rows and replaces/discards rows that fall outside the top 10. Top-K is worst case O(log(K)*n), while the unoptimized sort plus LIMIT is O(log(n)*n), which yields a few orders of magnitude of performance gain when K is something like 100 and n is roughly a million or more.

 
set hive.optimize.correlation=true;

Version: 0.12
Details: Enables the Correlation Optimizer.
Vectorization

set hive.vectorized.execution.enabled=true;

Version: 0.13 (requires ORC-formatted datasets)

Details: Vectorized query execution is a Hive feature that greatly reduces the CPU usage for typical query operations like scans, filters, aggregates, and joins. A standard query execution system processes one row at a time, which involves long code paths and significant metadata interpretation in the inner loop of execution.

Vectorized query execution streamlines operations by processing a block of 1024 rows at a time. Within the block, each column is stored as a vector (an array of a primitive data type). Simple operations like arithmetic and comparisons are done by quickly iterating through the vectors in a tight loop, with no or very few function calls or conditional branches inside the loop. These loops compile in a streamlined way that uses relatively few instructions and finishes each instruction in fewer clock cycles, on average, by effectively using the processor pipeline and cache memory. A sketch follows below.

References: Vectorization Design, Vectorization Execution
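
A sketch of vectorization in use; the table is hypothetical, and the data must be stored as ORC for vectorized execution to apply.

Code Block
-- Vectorization requires ORC-backed data.
create table clicks_orc (user_id bigint, url string, ts timestamp)
stored as orc;

set hive.vectorized.execution.enabled=true;

-- Scans, filters, and aggregates like these are processed 1024 rows at a time.
select url, count(*)
from clicks_orc
where user_id > 0
group by url;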

 
set hive.mapjoin.lazy.hashtable=false;

Details: Use when a single reducer vertex runs both shuffle joins and map joins in the same vertex. As a first step, try disabling the lazy hashtable implementation.
 
Hive Security    
 
hive.security.authorization.enabled=[true|false]
   
 
hive.security.authorization.manager
  • org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
  • org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
   
 
hive.server2.enable.doAs
  • false (when hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory)
 

When doAs is false, HS2 does NOT impersonate you on the cluster when running jobs or accessing files on HDFS. Files created in Hive will be owned by "hive" rather than by the user that ran the SQL, because the process runs as the "hive" user on the cluster; hence all files/directories are created as owned by "hive".

This can cause considerable confusion for those who use both the Hive CLI and HS2 to access the cluster. SQL issued via the Hive CLI runs as $USER, and the underlying authorization manager is most likely storage based. If the user is NOT a member of the "hdfs" group, they may not be able to access the underlying files.

This happens in reverse as well. If a table is created with the Hive CLI, its contents are owned by the user. Accessing the table via HS2 in "mr" execution mode will return permissions errors. When running Tez, the Tez task will simply fail with no indication that the failure was permissions related (as of Hive 0.13.1).

 
 
hive.security.authenticator.manager
  • org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
   
 
hive.security.metastore.authorization.manager
  • org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
   
Tez

Configuration Property Listing

Running Hive on Tez

   
 
set hive.optimize.ppd=[true,false]
   
 
set hive.execution.engine=[tez,mr]

Version: 0.13
 
set hive.optimize.index.filter=true;

Details: Setting this or the next value to false will let you avoid HIVE-7147, but at a cost in performance.
 
set hive.optimize.ppd.storage=true;

Details: See above.
 
set tez.queue.name=myqueue

Version: 0.13
Details: Establishes the queue, other than "default", in which to run the Tez containers.
 
set tez.am.container.session.delay-allocation-millis=1000

Version: 0.13
Details: The Tez setting for multi-tenancy: a container idling for more than 1 second will be killed. This is ideal for reuse within a query while still freeing up resources between queries. It has worked fairly well for multi-tenancy and keeps reuse working within long-running queries (http://people.apache.org/~gopalv/query10.svg). For a single-query perf run, you can set this to 2 minutes so that most queries reuse containers from a previous query in the same session.
 
hive.prewarm.enabled

Version: 0.13 (Note: this feature doesn't work too well yet)

Code Block
<property>
  <name>hive.prewarm.enabled</name>
  <value>false</value>
  <description>Enables container prewarm for Tez (Hadoop 2 only)</description>
</property>

Details: With the addition of TEZ-766, Tez supports pre-warmed containers within the Tez session. This allows Hive users to enable and use the feature from within the Hive shell.

Reference: JIRA
Hive Stats

The items below require statistics for each table. Gather the stats and 'analyze' the table/partitions as directed.

set hive.stats.autogather=true;
set hive.stats.dbclass=fs;
analyze table store_sales partition(ss_sold_date) compute statistics partialscan;

Version: 0.13
 

set hive.stats.autogather=[true, **false**]

Version: 0.13
Details: Gathers statistics automatically while data is inserted.
 
set hive.compute.query.using.stats=true;

Version: 0.13
Details: Optimizes "select count(1) from foo;" to run in ~1 second.
 
set hive.fetch.task.conversion=more;

Version: 0.13
Details: Optimizes "select x from foo limit 10;" to run in under 1 second.
 
set hive.optimize.index.filter=true;

Version: 0.13
Details: Optimizes "select x from foo where y = 10;" on ORC tables.
Hive CBO

hive.compute.query.using.stats=[true, **false**]   (0.13)
hive.stats.fetch.column.stats=[true, **false**]   (0.13)
hive.stats.fetch.partition.stats=[true, **false**]   (0.13)
hive.cbo.enable=[true, **false**]   (0.13)
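
A sketch of enabling these settings together and gathering the column statistics the optimizer relies on; store_sales is reused from the stats example above.

Code Block
set hive.cbo.enable=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;

-- CBO needs table and column statistics to cost plans.
analyze table store_sales compute statistics;
analyze table store_sales compute statistics for columns;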
Hive Tuning

hive.optimize.sort.dynamic.partition=[**true**, false]

Reference: HIVE-6455
Hive Server 2 (0.13)
 
hive.execution.engine

Details: Determines whether Hive queries are executed with Tez or MapReduce. If set to "mr", queries run on MapReduce; if set to "tez", they run on Tez. All queries executed through HiveServer2 use the specified hive.execution.engine setting.
 
hive.tez.container.size

Default: -1 (not specified)
Details: The memory (in MB) to be used for Tez tasks. If not specified, the memory setting from the MapReduce configuration (mapreduce.map.memory.mb) is used by default for map tasks.
 
hive.tez.java.opts

Details: Java command line options for Tez. If not specified, the MapReduce Java opts setting (mapreduce.map.java.opts) is used by default for map tasks.
 
hive.server2.tez.default.queues

Default: empty string (prevents execution of all queries)
Details: A comma-separated list of queues configured for the cluster. To enable query execution with Tez for HiveServer2, this parameter must be configured.
 
hive.server2.tez.sessions.per.default.queue

Default: 1
Details: The number of sessions for each queue named in hive.server2.tez.default.queues. Increasing this number may improve HiveServer2 performance on larger clusters.
 
hive.server2.tez.initialize.default.sessions

Default: false
Details: Enables a user to use HiveServer2 without enabling Tez for it. Users may want to run queries with Tez without a pool of sessions.
 
hive.server2.enable.doAs

Details: Must be set to false when the queue-related configurations above are used.
 
hive.prewarm.numcontainers

Details: The number of containers to create for each AM during prewarm (see the sketch below).
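
A minimal sketch of turning prewarm on for a session; the container count is illustrative.

Code Block
set hive.prewarm.enabled=true;
set hive.prewarm.numcontainers=10;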
ORC File Tuning

hive.exec.orc.memory.pool

Default: 0.5f
Details: Maximum fraction of heap that can be used by ORC file writers. Can affect how stripes are written and the stripe size.
 
ORC File Versioning

set hive.exec.orc.write.format="0.11";

Details: Sets the specific ORC file version to write (see the sketch below).
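
A sketch of writing the older 0.11 ORC format, e.g. for readers that predate the current writer; the table names are hypothetical.

Code Block
set hive.exec.orc.write.format="0.11";

create table events_orc (event_id bigint, payload string)
stored as orc;

-- Data written by this insert uses the 0.11 ORC format.
insert overwrite table events_orc
select event_id, payload
from events_staging;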
