...

Option

Description

-u <database URL>

The JDBC URL to connect to. Special characters in parameter values should be encoded with URL encoding if needed.

Usage: beeline -u db_URL 

-r

Reconnect to last used URL (if a user has previously used !connect to a URL and used !save to a beeline.properties file).

Usage: beeline -r

Version: 2.1.0 (HIVE-13670)

-n <username>

The username to connect as.

Usage: beeline -n valid_user

-p <password>

The password to connect as.

Usage: beeline -p valid_password

Optional password mode:

Starting with Hive 2.2.0 (HIVE-13589), the argument for the -p option is optional.

Usage: beeline -p [valid_password]

If the password is not provided after -p, Beeline will prompt for it while initiating the connection. When the password is provided, Beeline uses it to initiate the connection without prompting.

-d <driver class>

The driver class to use.

Usage: beeline -d driver_class

-e <query>

Query that should be executed. Double or single quotes enclose the query string. This option can be specified multiple times.

Usage: beeline -e "query_string"

Support to run multiple SQL statements separated by semicolons in a single query_string: 1.2.0 (HIVE-9877)
Bug fix (null pointer exception): 0.13.0 (HIVE-5765)
Bug fix (--headerInterval not honored): 0.14.0 (HIVE-7647)
Bug fix (running -e in background): 1.3.0 and 2.0.0 (HIVE-6758); workaround available for earlier versions 

-f <file>

Script file that should be executed.

Usage: beeline -f filepath

Version: 0.12.0 (HIVE-4268)
Note: If the script contains tabs, query compilation fails in version 0.12.0. This bug is fixed in version 0.13.0 (HIVE-6359).
Bug fix (running -f in background): 1.3.0 and 2.0.0 (HIVE-6758); workaround available for earlier versions 

-i (or) --init <file or files>

The initialization script file or files.

Usage: beeline -i /tmp/initfile

Single file:

Version: 0.14.0 (HIVE-6561)

Multiple files:

Version: 2.1.0 (HIVE-11336)

-w (or) --password-file <password file>

The password file to read password from.
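
Usage: beeline -w /tmp/password_file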

Version: 1.2.0 (HIVE-7175)

-a (or) --authType <auth type>

The authentication type passed to the JDBC driver as an auth property.
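
Usage: beeline -a auth_type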

Version: 0.13.0 (HIVE-5155)

--property-file <file>

File to read configuration properties from

Usage: beeline --property-file /tmp/a

Version: 2.2.0 (HIVE-13964)

--hiveconf property=value

Use value for the given configuration property. Properties that are listed in hive.conf.restricted.list cannot be reset with hiveconf (see Restricted List and Whitelist).

Usage: beeline --hiveconf prop1=value1

Version: 0.13.0 (HIVE-6173)

--hivevar name=value

Hive variable name and value. This is a Hive-specific setting in which variables can be set at the session level and referenced in Hive commands or queries.

Usage: beeline --hivevar var1=value1
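
Variables set this way can be referenced in queries through Hive's variable substitution, as ${var1} or ${hivevar:var1}. A sketch (the variable name, table name, and URL are placeholders; single quotes keep the shell from expanding the reference):

beeline -u db_URL --hivevar tablename=sample_table -e 'select * from ${tablename}'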

--color=[true/false]

Control whether color is used for display. Default is false.

Usage: beeline --color=true

(Not supported for Separated-Value Output formats. See HIVE-9770)

--showHeader=[true/false]

Show column names in query results (true) or not (false). Default is true.

Usage: beeline --showHeader=false

--headerInterval=ROWS

The interval for redisplaying column headers, in number of rows, when outputformat is table. Default is 100.

Usage: beeline --headerInterval=50

(Not supported for Separated-Value Output formats. See HIVE-9770)

--fastConnect=[true/false]

When connecting, skip building a list of all tables and columns for tab-completion of HiveQL statements (true) or build the list (false). Default is true.

Usage: beeline --fastConnect=false

--autoCommit=[true/false]

Enable/disable automatic transaction commit. Default is false.

Usage: beeline --autoCommit=true

--verbose=[true/false]

Show verbose error messages and debug information (true) or do not show (false). Default is false.

Usage: beeline --verbose=true

--showWarnings=[true/false]

Display warnings that are reported on the connection after issuing any HiveQL commands. Default is false.

Usage: beeline --showWarnings=true

--showDbInPrompt=[true/false]

Display the current database name in prompt. Default is false.

Usage: beeline --showDbInPrompt=true

Version: 2.2.0 (HIVE-14123)

--showNestedErrs=[true/false]

Display nested errors. Default is false.

Usage: beeline --showNestedErrs=true

--numberFormat=[pattern]

Format numbers using a DecimalFormat pattern.

Usage: beeline --numberFormat="#,###,##0.00"

--force=[true/false]

Continue running script even after errors (true) or do not continue (false). Default is false.

Usage: beeline --force=true

--maxWidth=MAXWIDTH

The maximum width to display before truncating data, in characters, when outputformat is table. Default is to query the terminal for current width, then fall back to 80.

Usage: beeline --maxWidth=150

--maxColumnWidth=MAXCOLWIDTH

The maximum column width, in characters, when outputformat is table. Default is 50 in Hive version 2.2.0+ (see HIVE-14135) or 15 in earlier versions.

Usage: beeline --maxColumnWidth=25

--silent=[true/false]

Reduce the amount of informational messages displayed (true) or not (false). It also stops displaying the log messages for the query from HiveServer2 (Hive 0.14 and later) and the HiveQL commands (Hive 1.2.0 and later). Default is false.

Usage: beeline --silent=true

--autosave=[true/false]

Automatically save preferences (true) or do not autosave (false). Default is false.

Usage: beeline --autosave=true

--outputformat=[table/vertical/csv/tsv/dsv/csv2/tsv2]

Format mode for result display. Default is table. See Separated-Value Output Formats below for a description of the recommended SV options.

Usage: beeline --outputformat=tsv

Version: dsv/csv2/tsv2 added in 0.14.0 (HIVE-8615)

--truncateTable=[true/false]

If true, truncates table columns in the console when they exceed the console length.
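
Usage: beeline --truncateTable=true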

Version: 0.14.0 (HIVE-6928)

--delimiterForDSV= DELIMITER

The delimiter for delimiter-separated values output format. Default is '|' character.
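
Usage: beeline --delimiterForDSV=',' (the delimiter shown is illustrative)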

Version: 0.14.0 (HIVE-7390)

--isolation=LEVEL

Set the transaction isolation level to TRANSACTION_READ_COMMITTED
or TRANSACTION_SERIALIZABLE.
See the "Field Detail" section in the Java Connection documentation.

Usage: beeline --isolation=TRANSACTION_SERIALIZABLE

--nullemptystring=[true/false]

Use historic behavior of printing null as empty string (true) or use current behavior of printing null as NULL (false). Default is false.

Usage: beeline --nullemptystring=false

Version: 0.13.0 (HIVE-4485)

--incremental=[true/false]

Defaults to true from Hive 2.3 onwards, before it defaulted to false. When set to false, the entire result set is fetched and buffered before being displayed, yielding optimal display column sizing. When set to true, result rows are displayed immediately as they are fetched, yielding lower latency and memory usage at the price of extra display column padding. Setting --incremental=true is recommended if you encounter an OutOfMemory on the client side (due to the fetched result set size being large).
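
Usage: beeline --incremental=true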

--incrementalBufferRows=NUMROWS

The number of rows to buffer when printing rows on stdout, defaults to 1000; only applicable if --incremental=true and --outputformat=table

Usage: beeline --incrementalBufferRows=1000

Version: 2.3.0 (HIVE-14170)

--maxHistoryRows=NUMROWS

The maximum number of rows to store in Beeline history.
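
Usage: beeline --maxHistoryRows=1000 (the value shown is illustrative)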

Version: 2.3.0 (HIVE-15166)

--delimiter=;

Set the delimiter for queries written in Beeline. Multi-char delimiters are allowed, but quotation marks, slashes, and -- are not allowed. Defaults to ;

Usage: beeline --delimiter=$$

Version: 3.0.0 (HIVE-10865)

--convertBinaryArrayToString=[true/false]

Display binary column data as a string (true) or as a byte array (false).

In Hive 3.x, true displays binary data as a string using the platform's default character set, and the default behavior (false) displays it using Arrays.toString(byte[] columnValue).

Starting with Hive 4.0, true displays binary data as a string using the UTF-8 character set, and the default behavior (false) displays it using Base64 encoding without padding.

Usage: beeline --convertBinaryArrayToString=true

Version: 3.0.0 (HIVE-14786); behavior changed in 4.0.0 (HIVE-23856)

--help

Display a usage message.

Usage: beeline --help

...

The following output formats are supported:

...

Example:

Result of the query select id, value, comment from test_table

<resultset>
  <result>
    <id>1</id>
    <value>Value1</value>
    <comment>Test comment 1</comment>
  </result>
  <result>
    <id>2</id>
    <value>Value2</value>
    <comment>Test comment 2</comment>
  </result>
  <result>
    <id>3</id>
    <value>Value3</value>
    <comment>Test comment 3</comment>
  </result>
</resultset>


json

(Hive 4.0) The result is displayed in JSON format where each row is a "result" element in the JSON array "resultset".

Example:

Result of the query select `String`, `Int`, `Decimal`, `Bool`, `Null`, `Binary` from test_table

{"resultset":[{"String":"aaa","Int":1,"Decimal":3.14,"Bool":true,"Null":null,"Binary":"SGVsbG8sIFdvcmxkIQ"},{"String":"bbb","Int":2,"Decimal":2.718,"Bool":false,"Null":null,"Binary":"RWFzdGVyCgllZ2cu"}]}


jsonfile

(Hive 4.0) The result is displayed in JSON format where each row is a distinct JSON object. This matches the expected format for a table created with the JSONFILE format.

Example:

Result of the query select `String`, `Int`, `Decimal`, `Bool`, `Null`, `Binary` from test_table

{"String":"aaa","Int":1,"Decimal":3.14,"Bool":true,"Null":null,"Binary":"SGVsbG8sIFdvcmxkIQ"}
{"String":"bbb","Int":2,"Decimal":2.718,"Bool":false,"Null":null,"Binary":"RWFzdGVyCgllZ2cu"}


Separated-Value Output Formats

The values of a row are separated by different delimiters.
There are five separated-value output formats available: csv, tsv, csv2, tsv2 and dsv.

csv2, tsv2, dsv

Starting with Hive 0.14 there are improved SV output formats available, namely dsv, csv2 and tsv2.
These three formats differ only in the delimiter between cells: comma for csv2, tab for tsv2, and configurable for dsv.

For the dsv format, the delimiter can be set with the delimiterForDSV option. The default delimiter is '|'.
Please be aware that only single character delimiters are supported.

Example:

Result of the query select id, value, comment from test_table

csv2

id,value,comment
1,Value1,Test comment 1
2,Value2,Test comment 2
3,Value3,Test comment 3


tsv2

id	value	comment
1	Value1	Test comment 1
2	Value2	Test comment 2
3	Value3	Test comment 3

dsv (the delimiter is |)

id|value|comment
1|Value1|Test comment 1
2|Value2|Test comment 2
3|Value3|Test comment 3


Quoting in csv2, tsv2 and dsv Formats

...

For versions earlier than 0.14, see the version note above.

Connection URL When ZooKeeper Service Discovery Is Enabled

...

jdbc:hive2://<host>:<port>/<db>;fetchsize=<value>


Info: Fetch Size Details (Hive Version 4.0)

The Hive JDBC driver will receive a preferred fetch size from the instance of HiveServer2 it has connected to.  This value is specified on the server by the hive.server2.thrift.resultset.default.fetch.size configuration.

The JDBC fetch size is only a hint; the server will attempt to respect the client's requested fetch size, though within some limits. HiveServer2 caps all requests at a maximum value specified by the hive.server2.thrift.resultset.max.fetch.size configuration value, regardless of the client's requested fetch size.

While a larger fetch size may limit the number of round-trips between the client and server, it does so at the expense of additional memory requirements on the client and server.

The default JDBC fetch size value may be overwritten, per statement, with the JDBC API (see the sketch after this list):

  • Setting a value of 0 instructs the driver to use the fetch size value preferred by the server
  • Setting a value greater than zero will instruct the driver to fetch that many rows, though the actual number of rows returned may be capped by the server
  • If no fetch size value is explicitly set on the JDBC driver's statement then the driver's default value is used
    • If the fetch size value is specified within the JDBC connection string, this is the default value
    • If the fetch size value is absent from the JDBC connection string, the server's preferred fetch size is used as the default value
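
As a hedged illustration of these rules (the connection URL, credentials, and query below are placeholders, and the sketch assumes the Hive JDBC driver, org.apache.hive.jdbc.HiveDriver, is on the classpath), the fetch size can be overridden per statement through the standard java.sql API:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder URL; a fetchsize given here becomes the driver's default for this connection.
    String url = "jdbc:hive2://localhost:10000/default;fetchsize=2000";
    try (Connection conn = DriverManager.getConnection(url, "user", "password");
         Statement stmt = conn.createStatement()) {
      // 0 asks the driver to use the fetch size preferred by the server.
      stmt.setFetchSize(0);
      // A value greater than zero requests that many rows per fetch; the server may still
      // cap it at hive.server2.thrift.resultset.max.fetch.size.
      // stmt.setFetchSize(5000);
      try (ResultSet rs = stmt.executeQuery("SELECT * FROM sample_table")) {
        while (rs.next()) {
          // process each row
        }
      }
    }
  }
}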

...

  • cookieAuth is set to true by default.
  • cookieName: If any of the incoming cookies' keys match the value of cookieName, the JDBC driver will not send any login credentials/Kerberos ticket to the server. The client will just send the cookie alone back to the server for authentication. The default value of cookieName is hive.server2.auth (this is the HiveServer2 cookie name). 
  • To turn off cookie replay, cookieAuth=false must be used in the JDBC URL (see the example after this list).
  • Important Note: As part of HIVE-9709, we upgraded the Apache http-client and http-core components of Hive to 4.4. To avoid any collision between this upgraded version of HttpComponents and any other versions that might be present in your system (such as the one provided by Apache Hadoop 2.6, which uses version 4.2.5 of the http-client and http-core components), the client is expected to set CLASSPATH in such a way that Beeline-related jars appear before Hadoop lib jars. This is achieved by setting HADOOP_USER_CLASSPATH_FIRST=true before using hive-jdbc. In fact, bin/beeline.sh does this!
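
For example, a connection URL of the following shape (a sketch using the same placeholders as the other URLs on this page) turns off cookie replay in HTTP mode:

jdbc:hive2://<host>:<port>/<db>;transportMode=http;httpPath=<http_endpoint>;cookieAuth=false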

Using 2-way SSL in HTTP Mode 

Info: Version 1.2.0 and later

This option is available starting in Hive 1.2.0.

HIVE-10447 enabled the JDBC driver to support 2-way SSL in HTTP mode. Please note that HiveServer2 currently does not support 2-way SSL, so this feature is handy when there is an intermediate server, such as Knox, that requires the client to support 2-way SSL.


JDBC connection URL:

jdbc:hive2://<host>:<port>/<db>;ssl=true;twoWay=true;sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>;sslKeyStore=<key_store_path>;keyStorePassword=<key_store_password>;transportMode=http;httpPath=<http_endpoint>

  • <trust_store_path> is the path where the client's truststore file lives. This is a mandatory non-empty field.
  • <trust_store_password> is the password to access the truststore.
  • <key_store_path> is the path where the client's keystore file lives. This is a mandatory non-empty field.
  • <key_store_password> is the password to access the keystore.

For versions earlier than 0.14, see the version note above.

In environments where exposing trustStorePassword and keyStorePassword in the connection URL is a security concern, a new option, storePasswordPath, introduced in HIVE-27308, can be used in the URL instead of trustStorePassword and keyStorePassword. The storePasswordPath value holds the path to a local keystore file that stores the trustStorePassword and keyStorePassword aliases. If trustStorePassword or keyStorePassword is present in the URL along with storePasswordPath, the respective password is taken directly from that option; otherwise it is fetched from the corresponding alias in the local keystore file (that is, the existing password options are preferred over storePasswordPath).

JDBC connection URL with storePasswordPath:

jdbc:hive2://<host>:<port>/<db>;ssl=true;twoWay=true;sslTrustStore=<trust_store_path>;sslKeyStore=<key_store_path>;storePasswordPath=<store_password_path>;transportMode=http;httpPath=<http_endpoint>

  • <trust_store_path> is the path where the client's truststore file lives. This is a mandatory non-empty field.
  • <key_store_path> is the path where the client's keystore file lives. This is a mandatory non-empty field.
  • <store_password_path> is the path to the local keystore file that stores the trustStorePassword and keyStorePassword aliases.

A local keystore file can be created by leveraging the hadoop credential command with the trustStorePassword and keyStorePassword aliases, as shown below. This file can then be passed with the storePasswordPath option in the connection URL.

hadoop credential create trustStorePassword -value mytruststorepassword -provider localjceks://file/tmp/client_creds.jceks

hadoop credential create keyStorePassword -value mykeystorepassword -provider localjceks://file/tmp/client_creds.jceks

For versions earlier than 0.14, see the version note above.

Passing HTTP Header Key/Value Pairs via JDBC Driver

...

For versions earlier than 0.14, see the version note above. 

Passing Custom HTTP Cookie Key/Value Pairs via JDBC Driver

...