...
Option | Description |
---|---|
-u <database URL> | The JDBC URL to connect to. Special characters in parameter values should be encoded with URL encoding if needed. Usage: beeline -u db_URL |
-r | Reconnect to the last used URL (if a user has previously used !connect to a URL and !save to a beeline.properties file). Usage: beeline -r Version: 2.1.0 (HIVE-13670) |
-n <username> | The username to connect as. Usage: beeline -n valid_user |
-p <password> | The password to connect as. Usage: beeline -p valid_password Optional password mode: Starting with Hive 2.2.0 (HIVE-13589) the argument for the -p option is optional. Usage: beeline -p [valid_password] If the password is not provided after -p, Beeline will prompt for the password while initiating the connection. When the password is provided, Beeline uses it to initiate the connection without prompting. |
-d <driver class> | The driver class to use. Usage: beeline -d driver_class |
-e <query> | Query that should be executed. Double or single quotes enclose the query string. This option can be specified multiple times. Usage: beeline -e "query_string" Support for running multiple SQL statements separated by semicolons in a single query_string: 1.2.0 (HIVE-9877) |
-f <file> | Script file that should be executed. Usage: beeline -f filepath Version: 0.12.0 (HIVE-4268) |
-i (or) --init <file or files> | The init files for initialization. Usage: Single file: beeline -i /tmp/initfile Version: 0.14.0 (HIVE-6561) Multiple files: beeline -i /tmp/initfile1 -i /tmp/initfile2 Version: 2.1.0 (HIVE-11336) |
-w (or) --password-file <password file> | The password file to read the password from. Version: 1.2.0 (HIVE-7175) |
-a (or) --authType <auth type> | The authentication type passed to the JDBC driver as an auth property. Version: 0.13.0 (HIVE-5155) |
--property-file <file> | File to read configuration properties from. Usage: beeline --property-file /tmp/a Version: 2.2.0 (HIVE-13964) |
--hiveconf property=value | Use value for the given configuration property. Properties that are listed in hive.conf.restricted.list cannot be reset with hiveconf (see Restricted List and Whitelist). Usage: beeline --hiveconf prop1=value1 Version: 0.13.0 (HIVE-6173) |
--hivevar name=value | Hive variable name and value. This is a Hive-specific setting in which variables can be set at the session level and referenced in Hive commands or queries. Usage: beeline --hivevar var1=value1 |
--color=[true/false] | Control whether color is used for display. Default is false. Usage: beeline --color=true (Not supported for separated-value output formats. See HIVE-9770) |
--showHeader=[true/false] | Show column names in query results (true) or not (false). Default is true. Usage: beeline --showHeader=false |
--headerInterval=ROWS | The interval for redisplaying column headers, in number of rows, when outputformat is table. Default is 100. Usage: beeline --headerInterval=50 (Not supported for separated-value output formats. See HIVE-9770) |
--fastConnect=[true/false] | When connecting, skip building a list of all tables and columns for tab-completion of HiveQL statements (true) or build the list (false). Default is true. Usage: beeline --fastConnect=false |
--autoCommit=[true/false] | Enable/disable automatic transaction commit. Default is false. Usage: beeline --autoCommit=true |
--verbose=[true/false] | Show verbose error messages and debug information (true) or do not show them (false). Default is false. Usage: beeline --verbose=true |
--showWarnings=[true/false] | Display warnings that are reported on the connection after issuing any HiveQL commands. Default is false. Usage: beeline --showWarnings=true |
--showDbInPrompt=[true/false] | Display the current database name in the prompt. Default is false. Usage: beeline --showDbInPrompt=true Version: 2.2.0 (HIVE-14123) |
--showNestedErrs=[true/false] | Display nested errors. Default is false. Usage: beeline --showNestedErrs=true |
--numberFormat=[pattern] | Format numbers using a DecimalFormat pattern. Usage: beeline --numberFormat="#,###,##0.00" |
--force=[true/false] | Continue running the script even after errors (true) or do not continue (false). Default is false. Usage: beeline --force=true |
--maxWidth=MAXWIDTH | The maximum width to display before truncating data, in characters, when outputformat is table. Default is to query the terminal for its current width, then fall back to 80. Usage: beeline --maxWidth=150 |
--maxColumnWidth=MAXCOLWIDTH | The maximum column width, in characters, when outputformat is table. Default is 50 in Hive 2.2.0+ (see HIVE-14135) or 15 in earlier versions. Usage: beeline --maxColumnWidth=25 |
--silent=[true/false] | Reduce the number of informational messages displayed (true) or not (false). It also stops displaying the log messages for the query from HiveServer2 (Hive 0.14 and later) and the HiveQL commands (Hive 1.2.0 and later). Default is false. Usage: beeline --silent=true |
--autosave=[true/false] | Automatically save preferences (true) or do not autosave (false). Default is false. Usage: beeline --autosave=true |
--outputformat=[table/vertical/csv/tsv/dsv/csv2/tsv2] | Format mode for result display. Default is table. See Separated-Value Output Formats below for a description of the recommended sv options. Usage: beeline --outputformat=tsv2 Version: dsv/csv2/tsv2 added in 0.14.0 (HIVE-8615) |
--truncateTable=[true/false] | If true, truncates table columns in the console when they exceed console length. Version: 0.14.0 (HIVE-6928) |
--delimiterForDSV=DELIMITER | The delimiter for delimiter-separated values output format. Default is the '\|' character. Version: 0.14.0 (HIVE-7390) |
--isolation=LEVEL | Set the transaction isolation level to TRANSACTION_READ_COMMITTED or TRANSACTION_SERIALIZABLE. Usage: beeline --isolation=TRANSACTION_SERIALIZABLE |
--nullemptystring=[true/false] | Use historic behavior of printing null as an empty string (true) or use current behavior of printing null as NULL (false). Default is false. Usage: beeline --nullemptystring=false Version: 0.13.0 (HIVE-4485) |
--incremental=[true/false] | Defaults to true from Hive 2.3, false in earlier versions. When set to false, the entire result set is fetched and buffered before being displayed, yielding optimal display column sizing. When set to true, result rows are displayed immediately as they are fetched, yielding lower latency and memory usage at the price of extra display column padding. Usage: beeline --incremental=true |
--incrementalBufferRows=NUMROWS | The number of rows to buffer when printing rows on stdout; defaults to 1000. Only applicable if --incremental=true and --outputformat=table. Usage: beeline --incrementalBufferRows=1000 Version: 2.3.0 (HIVE-14170) |
--maxHistoryRows=NUMROWS | The maximum number of rows of Beeline history to store. Version: 2.3.0 (HIVE-15166) |
--delimiter=; | Set the delimiter for queries written in Beeline. Multi-character delimiters are allowed, but quotation marks, slashes, and -- are not allowed. Defaults to ; Usage: beeline --delimiter=$$ Version: 3.0.0 (HIVE-10865) |
--convertBinaryArrayToString=[true/false] | Display binary column data as a string (true) or as a byte array (false). Usage: beeline --convertBinaryArrayToString=true Version: 3.0.0 (HIVE-14786): true displays binary column data as a string using the platform's default character set. Version: 4.0.0 (HIVE-23856): true displays binary column data as a string using the UTF-8 character set; the default behavior (false) displays binary data using Base64 encoding without padding. |
--help | Display a usage message. Usage: beeline --help |
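To make the URL-encoding note on the -u option concrete, here is a small sketch in Python (the parameter value and URL below are made-up examples, not values any real deployment requires):

```python
from urllib.parse import quote

# A parameter value containing ';' or '#' would corrupt the
# semicolon-separated JDBC URL, so percent-encode it first.
# The value and URL below are made-up examples.
raw_value = "p;ss#word"
encoded = quote(raw_value, safe="")   # ';' -> %3B, '#' -> %23
url = "jdbc:hive2://localhost:10000/default;password=" + encoded
print(encoded)  # p%3Bss%23word
```

The encoded value can then be placed safely inside the semicolon-delimited session-variable portion of the URL.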
...
The following output formats are supported:
- table
- vertical
- xmlattr
- xmlelements
- json
- jsonfile
- separated-value formats (csv, tsv, csv2, tsv2, dsv)
...
Separated-Value Output Formats
The values of a row are separated by different delimiters.
There are five separated-value output formats available: csv, tsv, csv2, tsv2 and dsv.
csv2, tsv2, dsv
Starting with Hive 0.14 there are improved SV output formats available, namely dsv, csv2 and tsv2.
These three formats differ only with the delimiter between cells, which is comma for csv2, tab for tsv2, and configurable for dsv.
json
(Hive 4.0) The result is displayed in JSON format where each row is a "result" element in the JSON array "resultset".
jsonfile
(Hive 4.0) The result is displayed in JSON format where each row is a distinct JSON object. This matches the expected format for a table created with the JSONFILE storage format.
For the dsv format, the delimiter can be set with the delimiterForDSV option. The default delimiter is '|'. Please be aware that only single-character delimiters are supported.
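To illustrate the dsv shape, here is a minimal sketch using Python's csv module with the default '|' delimiter. The rows are made up, and this only demonstrates the delimiter behavior, not Beeline's own writer:

```python
import csv
import io

# Delimiter-separated output with the default '|' delimiter.
# Rows are illustrative; Beeline formats its own result sets.
rows = [["id", "name"], ["1", "Alice"], ["2", "Bob"]]
buf = io.StringIO()
csv.writer(buf, delimiter="|", lineterminator="\n").writerows(rows)
print(buf.getvalue())
```

Swapping the delimiter argument mirrors what --delimiterForDSV does for Beeline's dsv output.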
Quoting in csv2, tsv2 and dsv Formats
...
For versions earlier than 0.14, see the version note above.
Connection URL When ZooKeeper Service Discovery Is Enabled
...
- cookieAuth is set to true by default.
- cookieName: If any of the incoming cookies' keys match the value of cookieName, the JDBC driver will not send any login credentials/Kerberos ticket to the server. The client will just send the cookie alone back to the server for authentication. The default value of cookieName is hive.server2.auth (this is the HiveServer2 cookie name).
- To turn off cookie replay, cookieAuth=false must be used in the JDBC URL.
- Important Note: As part of HIVE-9709, we upgraded the Apache http-client and http-core components of Hive to 4.4. To avoid any collision between this upgraded version of HttpComponents and any other versions that might be present in your system (such as the one provided by Apache Hadoop 2.6, which uses version 4.2.5 of the http-client and http-core components), the client is expected to set CLASSPATH in such a way that Beeline-related jars appear before the Hadoop lib jars. This is achieved by setting HADOOP_USER_CLASSPATH_FIRST=true before using hive-jdbc. In fact, bin/beeline.sh does this!
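The cookie-replay decision described in the bullets above can be sketched as follows. The cookie names and values are illustrative stand-ins; the real logic lives inside the JDBC driver:

```python
# If an incoming cookie's key matches cookieName (default
# "hive.server2.auth"), the driver replays the cookie instead of
# re-sending credentials. Names/values here are made up.
cookie_name = "hive.server2.auth"
incoming_cookies = {"hive.server2.auth": "s3ssiontoken", "theme": "dark"}

replay_cookie = incoming_cookies.get(cookie_name)
send_credentials = replay_cookie is None   # only when no matching cookie
print(send_credentials)  # False
```

Setting cookieAuth=false in the URL corresponds to skipping this lookup entirely and always sending credentials.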
Using 2-way SSL in HTTP Mode
Info: This option is available starting in Hive 1.2.0.
HIVE-10447 enabled the JDBC driver to support 2-way SSL in HTTP mode. Please note that HiveServer2 currently does not support 2-way SSL. So this feature is handy when there is an intermediate server, such as Knox, which requires the client to support 2-way SSL.
JDBC connection URL:
jdbc:hive2://<host>:<port>/<db>;ssl=true;twoWay=true;
sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>;sslKeyStore=<key_store_path>;keyStorePassword=<key_store_password>;
transportMode=http;httpPath=<http_endpoint>
- <trust_store_path> is the path where the client's truststore file lives. This is a mandatory non-empty field.
- <trust_store_password> is the password to access the truststore.
- <key_store_path> is the path where the client's keystore file lives. This is a mandatory non-empty field.
- <key_store_password> is the password to access the keystore.
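For orientation, the client-side ingredients of 2-way SSL can be sketched with Python's ssl module. The commented-out load calls correspond to the truststore and keystore fields above; the paths are hypothetical, so those calls are shown but not executed:

```python
import ssl

# Two-way (mutual) SSL from the client's viewpoint:
#  - verify the server's certificate (the truststore role), and
#  - present a client certificate of its own (the keystore role).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)   # implies CERT_REQUIRED
# ctx.load_verify_locations("/path/to/truststore.pem")               # <trust_store_path>
# ctx.load_cert_chain("/path/to/client.pem", "/path/to/client.key")  # <key_store_path>
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

In the JDBC URL, sslTrustStore/trustStorePassword play the verify role and sslKeyStore/keyStorePassword supply the client certificate.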
In environments where exposing trustStorePassword and keyStorePassword in the connection URL is a security concern, a new option storePasswordPath is introduced with HIVE-27308 that can be used in the URL instead of trustStorePassword and keyStorePassword. The storePasswordPath value holds the path to a local keystore file storing the trustStorePassword and keyStorePassword aliases. When an existing trustStorePassword or keyStorePassword is present in the URL along with storePasswordPath, the respective password is obtained directly from the password option; otherwise, the particular alias is fetched from the local keystore file (i.e., existing password options are preferred over storePasswordPath).
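The precedence rule can be sketched as follows. The dictionaries stand in for the parsed URL options and the keystore alias lookup, and both are illustrative:

```python
# Explicit trustStorePassword/keyStorePassword in the URL win; otherwise
# the alias is fetched from the keystore file at storePasswordPath.
# Both dictionaries below are illustrative stand-ins.
def resolve_password(url_options, alias, keystore_aliases):
    if alias in url_options:              # existing password option preferred
        return url_options[alias]
    return keystore_aliases.get(alias)    # fall back to the stored alias

url_options = {"trustStorePassword": "from-url"}
keystore_aliases = {"trustStorePassword": "from-store",
                    "keyStorePassword": "ks-from-store"}

print(resolve_password(url_options, "trustStorePassword", keystore_aliases))  # from-url
print(resolve_password(url_options, "keyStorePassword", keystore_aliases))    # ks-from-store
```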
JDBC connection URL with storePasswordPath:
jdbc:hive2://<host>:<port>/<db>;ssl=true;twoWay=true;
sslTrustStore=<trust_store_path>;sslKeyStore=<key_store_path>;storePasswordPath=<store_password_path>;
transportMode=http;httpPath=<http_endpoint>
A local keystore file can be created by leveraging the hadoop credential command with trustStorePassword and keyStorePassword aliases as shown below. This file can then be passed with the storePasswordPath option in the connection URL.
hadoop credential create trustStorePassword -value mytruststorepassword -provider localjceks://file/tmp/client_creds.jceks
hadoop credential create keyStorePassword -value mykeystorepassword -provider localjceks://file/tmp/client_creds.jceks
Passing HTTP Header Key/Value Pairs via JDBC Driver
...
Passing Custom HTTP Cookie Key/Value Pairs via JDBC Driver
...