20. Export data using WbExport

20.1. Memory usage and WbExport
20.2. Exporting Excel files
20.3. General WbExport parameters
20.4. Parameters for text export
20.5. Parameters for XML export
20.6. Parameters for type SQLUPDATE, SQLINSERT or SQLDELETEINSERT
20.7. Parameters for Spreadsheet types (ods, xlsm, xls, xlsx)
20.8. Parameters for HTML export
20.9. Parameters for JSON export
20.10. Compressing export files
20.11. Examples

The WbExport command exports the contents of the database into external files, e.g. plain text ("CSV") or XML.

The WbExport command can be used like any other SQL command (such as UPDATE or INSERT). This includes the usage in scripts that are run in batch mode.

The WbExport command exports either the result of the next SQL Statement (which has to produce a result set) or the content of the table(s) specified with the -sourceTable parameter. The data is directly written to the output file and not loaded into memory. The export file(s) can be compressed ("zipped") on the fly. WbImport can import the zipped (text or XML) files directly without the need to unzip them.

If you want to save the data that is currently displayed in the result area into an external file, please use the Save Data as feature. You can also use the Database Explorer to export multiple tables.

[Note]

When using a SELECT based export, you have to run both statements (WbExport and SELECT) as one script. Either select both statements in the editor and choose SQL → Execute selected, or make the two statements the only statements in the editor and choose SQL → Execute all.

You can also export the result of a SELECT statement by selecting the statement in the editor and then choosing SQL → Export query result.

When exporting data into a Text or XML file, the content of BLOB columns is written into separate files, one file for each column of each row. Text files that are created this way can most probably only be imported using SQL Workbench/J, as the main file will contain the filename of the BLOB data file instead of the actual BLOB data. The only other application I know of that can handle this type of import is Oracle's SQL*Loader utility. If you run the text export together with the parameter -formatFile=oracle, a control file will be created that contains the appropriate definitions to read the BLOB data from the external files.
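
As an illustration, such an export could look like this (a sketch; the table person_images and its BLOB column are hypothetical names):

-- each BLOB value is written to a separate file;
-- the generated .ctl file tells SQL*Loader how to read them
WbExport -type=text
         -file='c:/data/person_images.txt'
         -formatFile=oracle
         -sourceTable=person_images;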

[Important]
Oracle's BFILE, PostgreSQL's large object and SQL Server's filestream types are not real BLOB datatypes (from a JDBC point of view) and are currently not exported by WbExport. Only columns that are reported as BLOB, BINARY, VARBINARY or LONGVARBINARY in the column "JDBC type" in the DbExplorer will be exported correctly into a separate file.

20.1. Memory usage and WbExport

WbExport is designed to directly write the rows that are retrieved from the database to the export file without buffering them in memory (except for the XLS and XLSX formats).

Some JDBC drivers (e.g. PostgreSQL, jTDS and the Microsoft driver) read the full result obtained from the database into memory. In that case, exporting large results might still require a lot of memory. Please refer to the chapter Common problems for details on how to configure the individual drivers if this happens to you.

20.2. Exporting Excel files

If you need to export data for Microsoft Excel, additional libraries are required to write the native Excel formats (xls and the new xlsx introduced with Office 2007). Exporting the "SpreadsheetML" format introduced with Office 2003 does not require additional libraries.

SQL Workbench/J supports three different Excel file formats:

Value for -type parameter | Description
xlsm

This is the plain XML ("SpreadsheetML") format introduced with Office 2003. This format is always available and does not need any additional libraries.

Files with this format should be saved with the extension xml (otherwise Office is not able to open them properly)

xls

This is the old binary format used by Excel 97 up to 2003. To export this format, only poi.jar is needed. If the library is not available, this format will not be listed in the export dialog ("Save data as...")

Files with this format should be saved with the extension xls

xlsx

This is the "new" XML format (OfficeOpen XML) introduced with Office 2007. To create this file format, additionaly libraries are required. If those libraries are not available, this format will not be listed in the export dialog ("Save data as...")

Files with this format should be saved with the extension xlsx

For a comparison of the different Microsoft Office XML formats please refer to: http://en.wikipedia.org/wiki/Microsoft_Office_XML_formats

All required libraries are included in the "Generic package including all optional libraries".

[Important]

To write the file formats XLS and XLSX the entire file needs to be built in memory. When exporting results with a large number of rows this will require a substantial amount of memory.

WbExport and the "Max. Rows" option

When you use the WbExport command together with a SELECT query, the "Max. Rows" setting will be ignored for the export.
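
Provided the required additional libraries are available, a query result can be exported into an xlsx file like this (the file name and query are only examples):

WbExport -type=xlsx
         -file='c:/data/sales.xlsx'
         -header=true;
SELECT * FROM sales;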

20.3. General WbExport parameters

WbExport supports conditional execution.

Parameter | Description
-type

Possible values: text, sqlinsert, sqlupdate, sqldeleteinsert, sqlmerge, xml, ods, xlsm, xls, xlsx, html, json

Defines the type of the output file. sqlinsert will create the necessary INSERT statements to put the data into a table. If the records may already exist in the target table but you don't want to (or cannot) delete the content of the table before running the generated script, SQL Workbench/J can create a DELETE statement for every INSERT statement. To create this kind of script, use the sqldeleteinsert type.

sqlmerge will generate statements that INSERT or UPDATE rows, depending on whether they already exist in the target table. The exact syntax depends on the current database. To select the syntax for a different DBMS, use the parameter -mergeType.

In order for this to work properly the table needs to have key columns defined, or you have to define the key columns manually using the -keyColumns switch.
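
For example, a merge export could look like this (a sketch; the table name and key column are hypothetical, and the valid values for -mergeType depend on the DBMS):

WbExport -type=sqlmerge
         -file='c:/data/person_merge.sql'
         -mergeType=oracle
         -keyColumns=id
         -sourceTable=person;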

sqlupdate will generate UPDATE statements that update all non-key columns of the table. This will only generate valid UPDATE statements if at least one key column is present. If the table does not have key columns defined, or you want to use different columns, they can be specified using the -keyColumns switch.

ods will generate a spreadsheet file in the OpenDocument format that can be opened e.g. with OpenOffice.org.

xlsm will generate a spreadsheet file in the Microsoft Excel 2003 XML format ("XML Spreadsheet"). When using Microsoft Office 2010, this export format should be saved with a file extension of .xml in order to be identified correctly.

xls will generate a spreadsheet file in the proprietary (binary) format for Microsoft Excel (97-2003). The file poi.jar is required.

xlsx will generate a spreadsheet file in the default format introduced with Microsoft Office 2007. Additional external libraries are required in order to be able to use this format. Please read the note at the beginning of this section.

This parameter supports auto-completion.

-file

The output file to which the exported data is written.

This parameter is ignored if -outputDir is also specified.

-createDir If this parameter is set to true, SQL Workbench/J will create any needed directories when creating the output file.
-sourceTable

Defines a list of tables to be exported. If this switch is used, -outputdir is also required unless exactly one table is specified. If one table is specified, the -file parameter is used to generate the file for the table. If more than one table is specified, the -outputdir parameter is used to define the directory where the generated files should be stored. Each file will be named as the exported table with the appropriate extension (.xml, .sql, etc). You can specify * as the table name which will then export all tables accessible by the current user.

If you want to export tables from a different user or schema you can use a schema name combined with a wildcard e.g. -sourcetable=otheruser.*. In this case the generated output files will contain the schema name as part of the filename (e.g. otheruser.person.txt). When importing these files, SQL Workbench/J will try to import the tables into the schema/user specified in the filename. If you want to import them into a different user/schema, then you have to use the -schema switch for the import command.
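
For example, to export all tables of the schema otheruser into text files (the directory name is only an example):

WbExport -type=text
         -outputDir='c:/data/otheruser'
         -sourceTable=otheruser.*;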

This parameter supports auto-completion.

-schema

Define the schema in which the table(s) specified with -sourceTable are located. This parameter only accepts a single schema name. If you want to export tables from more than one schema, you need to fully qualify them as shown in the description of the -sourceTable parameter.

This parameter supports auto-completion.

-types

Selects the object types to be exported. By default only TABLEs are exported. If you want to export the content of VIEWs or SYNONYMs as well, you have to specify all types with this parameter.

-sourceTable=* -types=VIEW,SYNONYM or -sourceTable=T% -types=TABLE,VIEW,SYNONYM

This parameter supports auto-completion.

-excludeTables

The tables listed in this parameter will not be exported. This can be used when all but a few tables should be exported from a schema. First all tables specified through -sourceTable will be evaluated. The tables specified by -excludeTables can include wildcards in the same way as -sourceTable.

-sourceTable=* -excludeTables=TEMP* will export all tables, but not those starting with TEMP.

This parameter supports auto-completion.

-sourceTablePrefix

Define a common prefix for all tables listed with -sourceTable. When this parameter is specified, the existence of each table is no longer tested (as is normally done).

When this parameter is specified the generated statement for exporting the table is changed to a SELECT * FROM [prefix]tableName instead of listing all columns individually.

This can be used when exporting views on tables, e.g. when for each table a view with a certain prefix exists (e.g. the table PERSON has a view V_PERSON that does some filtering of the data).

This parameter cannot be used to select tables from a specific schema. The prefix will be prepended to the table's name.

-outputDir When using the -sourceTable switch with multiple tables, this parameter is mandatory and defines the directory where the generated files should be stored.
-continueOnError When exporting more than one table, this parameter controls whether the whole export will be terminated if an error occurs during export of one of the tables.
-encoding

Defines the encoding in which the file should be written. Common encodings are ISO-8859-1, ISO-8859-15, UTF-8 (or UTF8). To get a list of available encodings, execute WbExport with the parameter -showEncodings. This parameter is ignored for XLS, XLSX and ODS exports.

This parameter supports auto-completion. When auto-completion is invoked for this parameter, it shows a list of encodings defined through the configuration property workbench.export.defaultencodings. This is a comma-separated list that can be changed using WbSetConfig.

-showEncodings Displays the encodings supported by your Java version and operating system. If this parameter is present, all other parameters are ignored.
-lineEnding

Possible values are: crlf, lf

Defines the line ending to be used for XML or text files. crlf puts the ASCII characters #13 and #10 after each line. This is the standard format on Windows® based systems (dos and win are synonymous values for crlf).

lf puts only the ASCII character #10 at the end of each line. This is the standard format on Unix based systems (unix is a synonymous value for lf).

The default line ending used depends on the platform where SQL Workbench/J is running.

This parameter supports auto-completion.

-header

Possible values: true, false

If this parameter is set to true, the header (i.e. the column names) is placed into the first line of the output file. The default is to not create a header line. You can define the default value for this parameter in the file workbench.settings. This parameter is valid for text and spreadsheet (OpenDocument, Excel) exports.

-compress

Selects whether the output file should be compressed and put into a ZIP archive. An archive will be created with the name of the specified output file but with the extension zip. The archive will then contain the specified file (e.g. if you specify data.txt, an archive data.zip will be created containing exactly one entry with the name data.txt). If the exported result set contains BLOBs, they will be stored in a separate archive, named data_lobs.zip.

When exporting multiple tables using the -sourcetable parameter, then SQL Workbench/J will create one ZIP archive for each table in the specified output directory with the filename "tablename".zip. For any table containing BLOB data, one additional ZIP archive is created.

-tableWhere

Defines an additional WHERE clause that is appended to all SELECT queries to retrieve the rows from the database. No validation check will be done for the syntax or the columns in the where clause.

If the specified condition is not valid for all exported tables, the export will fail.

-clobAsFile

Possible values: true, false

For SQL, XML and Text exports this controls how the contents of CLOB fields are exported. Usually the CLOB content is put directly into the output file. When generating SQL scripts with WbExport this can be a problem, as not all DBMS can cope with long character literals (e.g. Oracle has a limit of 4000 bytes). When this parameter is set to true, SQL Workbench/J will create one file for each CLOB column value. This is the same behaviour as with BLOB columns.

Text files that are created with this parameter set to true will contain the filename of the generated output file instead of the actual column value. When importing such a file using WbImport you have to specify the -clobIsFilename=true parameter. Otherwise the filenames will be stored in the database rather than the CLOB data. This parameter is not necessary when importing XML exports, as WbImport will automatically recognize the external files.

[Important]
SQL exports (-type=sqlinsert) generated with -clobAsFile=true can only be used with SQL Workbench/J.

All CLOB files are written using the encoding specified with the -encoding switch. If the -encoding parameter is not specified, the default file encoding will be used.
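
A sketch of a matching export/import pair, assuming a table documents with a CLOB column exists:

WbExport -type=text
         -file='c:/data/documents.txt'
         -clobAsFile=true
         -encoding=UTF-8
         -sourceTable=documents;

WbImport -type=text
         -file='c:/data/documents.txt'
         -table=documents
         -clobIsFilename=true;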

-lobIdCols

When exporting CLOB or BLOB columns as external files, the filename with the LOB content is generated using the row and column number for the currently exported LOB column (e.g. data_r15_c4.data). If you prefer to have the value of a unique column combination as part of the file name, you can specify those columns using the -lobIdCols parameter. The filename for the LOB will then be generated using the base name of the export file, the column name of the LOB column and the values of the specified columns. If you export your data into a file called user_info and specify -lobIdCols=id and your result contains a column called img, the LOB files will be named e.g. user_info_img_344.data
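
The command corresponding to this example could look like this (a sketch; the table name user_info is assumed):

WbExport -type=text
         -file='c:/data/user_info.txt'
         -lobIdCols=id
         -sourceTable=user_info;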

-lobsPerDirectory

When exporting CLOB or BLOB columns as external files, the generated files can be distributed over several directories to avoid an excessive number of files in a single directory. The parameter lobsPerDirectory defines how many LOB files are written into a single directory. When the specified number of files have been written, a new directory is created. The directories are always created as a sub-directory of the target directory. The name for each directory is the base export filename plus "_lobs" plus a running number. So if you export the data into a file "the_big_table.txt", the LOB files will be stored in "the_big_table_lobs_1", "the_big_table_lobs_2", "the_big_table_lobs_3" and so on.

The directories will be created if needed, but if the directories already exist (e.g. because of a previous export) their contents will not be deleted!

-filenameColumn

When exporting CLOB or BLOB columns as external files, the complete filename can be taken from a column of the result set (instead of dynamically creating a new file name based on the row and column numbers).

This parameter only makes sense if exactly one BLOB column of a table is exported.

-append

Possible values: true,false

Controls whether results are appended to an existing file, or overwrite an existing file. This parameter is only supported for text, SQL, XLS and XLSX export types.

When used with XLS or XLSX exports, a new worksheet will be created.

-dateFormat The date format to be used when writing date columns into the output file. This parameter is ignored for SQL exports.
-timestampFormat The format to be used when writing datetime (or timestamp) columns into the output file. This parameter is ignored for SQL exports.
-locale

The locale (language) to be used when formatting date and timestamp values. The language will only be relevant if the date or timestamp format contains placeholders that are language dependent (e.g. the name of the month or weekday).

This parameter supports code-completion.

-blobType

Possible values: file, dbms, ansi, base64, pghex

This parameter controls how BLOB data will be put into the generated SQL statements. By default no conversion will be done, so the actual value that is written to the output file depends on the JDBC driver's implementation of the Blob interface. It is only valid for Text, SQL and XML exports, although not all parameter values make sense for all export types.

The type base64 is primarily intended for Text exports.

The type pghex is intended to be used for export files that should be imported using PostgreSQL's COPY command.

The types dbms and ansi are intended for SQL exports and generate a representation of the binary data as part of the SQL statement. DBMS will use a format that is understood by the DBMS you are exporting from, while ansi will generate a standard hex based representation of the binary data. The syntax generated by the ansi format is not understood by all DBMS!

Two additional SQL literal formats are available that can be used together with PostgreSQL: pgDecode and pgEscape. pgDecode will generate a hex representation using PostgreSQL's decode() function. Using decode is a very compact format. pgEscape will use PostgreSQL's escaped octets, and generates much bigger statements (due to the increased escaping overhead).

When using file, base64 or ansi, the file can be imported using WbImport.

The parameter value file will cause SQL Workbench/J to write the contents of each blob column into a separate file. The SQL statement will contain the SQL Workbench/J specific extension to read the blob data from the file. For details please refer to BLOB support. If you are planning to run the generated SQL scripts using SQL Workbench/J this is the recommended format.

[Important]
Note that SQL scripts generated with -blobType=file can only be used with SQL Workbench/J

The parameter value ansi will generate "binary strings" that are compatible with the ANSI definition for binary data. MySQL and Microsoft SQL Server support this kind of literal.

The parameter value dbms will create a DBMS specific "binary string". MySQL, HSQLDB, H2 and PostgreSQL are known to support literals for binary data. For other DBMS using this option will still create an ANSI literal, but this might result in an invalid SQL statement.

This parameter supports auto-completion.

-replaceExpression -replaceWith

Using these parameters, arbitrary text can be replaced during the export. -replaceExpression defines the regular expression that is to be replaced. -replaceWith defines the replacement value. -replaceExpression='(\n|\r\n)' -replaceWith=' ' will replace all newline characters with a blank.

The search and replace is done on the "raw" data retrieved from the database before the values are converted to the corresponding output format. In particular this means replacing is done before any character escaping takes place.

Because the search and replace is done before the data is converted to the output format, it can be used for all export types (text, xml, Excel, ...).

Only character columns (CHAR, VARCHAR, CLOB, LONGVARCHAR) are taken into account.

-trimCharData

Possible values: true, false

If this parameter is set to true, trailing whitespace will be trimmed from values of CHAR columns. This is equivalent to the Trim CHAR data option in the connection profile.

-literalFalse -literalTrue

These parameters define the values that should be exported for real boolean values. Both parameters must be specified. If only one of them is provided, it is ignored.

-showProgress

Valid values: true, false, <numeric value>

Controls the update frequency in the status bar (when running in GUI mode). By default, every 10th row is reported. To disable the display of the progress, specify a value of 0 (zero) or the value false. true will set the progress interval to 1 (one).

-writeErrorToOutput

Possible values: true, false

If this parameter is set to true, any error that occurs during the processing of the export will be written "as-is" into the export file.

This will most likely cause a subsequent import of that file to fail, as the error message will mess up the formatting of the export file.

20.4. Parameters for text export

Parameter | Description
-delimiter

The given string sequence will be used to separate the values between two columns. The default is a tab character, equivalent to specifying: -delimiter=\t

The character can also be specified as a unicode escape sequence, e.g. -delimiter='\u0009'

-rowNumberColumn If this parameter is specified with a value, the value defines the name of an additional column that will contain the row number. The row number will always be exported as the first column. If the text file is not created with a header (-header=false) a value must still be provided to enable the creation of the additional column.
-quoteChar

The character (or sequence of characters) to be used to enclose column values that contain the delimiter. No quote character is used, unless one is specified. To set the double quote as the quote character you have to enclose it in single quotes: -quotechar='"'

The character can also be specified as a unicode escape sequence, e.g. -quoteChar='\u0022'

-quoteCharEscaping

Possible values: none, escape, duplicate

Defines how quote characters that appear in the actual data are written to the output file.

If no quote character has been specified with the -quoteChar switch, this option is ignored.

If escape is specified, a quote character that is embedded in the exported data is written as here is a \" quote character.

If duplicate is specified, a quote character that is embedded in the exported data is written as two quotes e.g. here is a "" quote character.

This parameter supports auto-completion.

-quoteAlways

Possible values: true, false

If quoting is enabled (via -quoteChar), then character data will normally only be quoted if the delimiter is found inside the actual value that is written to the output file. If -quoteAlways=true is specified, character data will always be enclosed in the specified quote character. This parameter is ignored if no quote character is specified. If you expect the quote character to be contained in the values, you should enable character escaping, otherwise the quote character that is part of the exported value will break the quoting during import.

NULL values will not be quoted even if this parameter is set to true. This is useful to distinguish between NULL values and empty strings.

-decimal The decimal symbol to be used for numbers. The default is a dot, e.g. the number Pi would be written as 3.14159. When using -decimal=',' the number Pi would be written as: 3,14159
-maxDigits

Defines a maximum number of decimal digits. If this parameter is not specified, decimal values are exported according to the global formatting settings.

Specifying a value of 0 (zero) results in exporting as many digits as available.

-fixedDigits

Defines a fixed number of decimal digits. If this parameter is not specified decimal values are exported according to the -maxDigits parameter (or the global default).

If this parameter is specified, all decimal values are exported with the defined number of digits. If -fixedDigits=4 is used, the value 1.2 will be written as 1.2000.

This parameter is ignored if -maxDigits is also provided.

-escapeText

This parameter controls the escaping of non-printable or non-ASCII characters. Valid options are

  • control which will escape everything below ASCII 32 (newline, tab, etc)
  • 7bit which will escape everything below ASCII 32 and above 126
  • 8bit which will escape everything below ASCII 32 and above 255
  • extended which will escape everything outside the range [32-126] and [161-255]
  • pgcopy which is the same as control but will write the characters as two byte hex values which are compatible with the input format for PostgreSQL's COPY.

This will write a "short-hand" representation of control characters (e.g. \n for a newline) and a unicode representation for characters above ASCII 126 (e.g. \u00F6 for ö). This file can only be imported using SQL Workbench/J (at least I don't know of any DBMS specific loader that will decode this properly).

If character escaping is enabled, then the quote character will be escaped inside quoted values and the delimiter will be escaped inside non-quoted values. The delimiter could also be escaped inside a quoted value if the delimiter falls into the selected escape range (e.g. a tab character).

To import a text file with escaped values using WbImport, the -decode=true parameter must be used. Note that WbImport cannot handle the pgcopy encoding.
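
A sketch of an export with escaping enabled and the matching import (table and file names are only examples):

WbExport -type=text
         -file='c:/data/data.txt'
         -escapeText=control
         -sourceTable=some_table;

WbImport -type=text
         -file='c:/data/data.txt'
         -table=some_table
         -decode=true;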

This parameter supports auto-completion.

-nullString

Defines the string value that should be written into the output file for a NULL value. This value will be enclosed with the specified quote character only if -quoteAlways=true is specified as well.

-formatFile

Possible values: postgres, oracle, sqlserver, db2, mysql

This parameter controls the creation of a control file for the bulk load utilities of some DBMS.

  • postgres will create a SQL script with the necessary COPY syntax to import the generated text file
  • oracle will create a control file (.ctl) for Oracle's SQL*Loader utility
  • sqlserver will create a format file (.fmt) for Microsoft's bcp utility
  • db2 will create a SQL script with a DB2 IMPORT command
  • mysql will create a SQL script with a MySQL LOAD DATA INFILE command

You can specify more than one format (separated by a comma). In that case one control file for each format will be created.

[Note]

The generated format file(s) are intended as a starting point for your own adjustments. Don't expect them to be complete.

This parameter supports auto-completion.
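
For example, the following command (table and file names are only examples) creates the text file together with a COPY script for PostgreSQL and a format file for Microsoft's bcp utility:

WbExport -type=text
         -file='c:/data/orders.txt'
         -formatFile=postgres,sqlserver
         -sourceTable=orders;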

20.5. Parameters for XML export

Parameter | Description
-table The given tablename will be put into the <table> tag as an attribute.
-decimal The decimal symbol to be used for numbers. The default is a dot (e.g. 3.14159)
-useCDATA

Possible values: true, false

Normally all data written into the xml file will be written with escaped XML characters (e.g. < will be written as &lt;). If you don't want that escaping, set -useCDATA=true and all character data (VARCHAR, etc) will be enclosed in a CDATA section.

With -useCDATA=true an HTML value would be written like this:

<![CDATA[<b>This is a title</b>]]>

With -useCDATA=false (the default) an HTML value would be written like this:

&lt;b&gt;This is a title&lt;/b&gt;

-xsltParameter A list of parameters (key/value pairs) that should be passed to the XSLT processor. When using e.g. the wbreport2liquibase.xslt stylesheet, the value of the author attribute can be set using -xsltParameter="authorName=42". This parameter can be provided multiple times for multiple parameters, e.g. when using wbreport2pg.xslt: -xsltParameter="makeLowerCase=42" -xsltParameter="useJdbcTypes=true"
-stylesheet The name of the XSLT stylesheet that should be used to transform the SQL Workbench/J specific XML file into a different format. If -stylesheet is specified, -xsltOutput has to be specified as well.
-xsltOutput This parameter defines the output file for the XSLT transformation specified through the -styleSheet parameter
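
A sketch of an XML export combined with an XSLT transformation (the stylesheet name is a placeholder; any valid XSLT stylesheet can be used):

WbExport -type=xml
         -file='c:/data/person.xml'
         -stylesheet='my_stylesheet.xslt'
         -xsltOutput='c:/data/person_transformed.txt'
         -sourceTable=person;
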
-verboseXML

Possible values: true, false

This parameter controls the tags that are used in the XML file and minor formatting features. The default is -verboseXML=true and this will generate more readable tags and formatting. However the overhead imposed by this is quite high. Using -verboseXML=false uses shorter tag names (no longer than two characters) and puts more information on one line. This output is harder to read for a human but is smaller in size, which could be important for exports with large result sets.

20.6. Parameters for type SQLUPDATE, SQLINSERT or SQLDELETEINSERT

Parameter | Description
-table Define the tablename to be used for the UPDATE or INSERT statements. This parameter is required if the SELECT statement has multiple tables in the FROM list.
-charfunc

If this parameter is given, any non-printable character in a text/character column will be replaced with a call to the given function with the ASCII value as the parameter.

If -charfunc=chr is given (e.g. for an Oracle syntax), a CR (=13) inside a character column will be replaced with:

INSERT INTO ... VALUES ('First line'||chr(13)||'Second line' ... )

This setting will affect ASCII values from 0 to 31.

-concat If the parameter -charfunc is used SQL Workbench/J will concatenate the individual pieces using the ANSI SQL operator for string concatenation. In case your DBMS does not support the ANSI standard (e.g. MS ACCESS) you can specify the operator to be used: -concat=+ defines the plus sign as the concatenation operator.
-sqlDateLiterals

Possible values: jdbc, ansi, dbms, default

This parameter controls the generation of date or timestamp literals. By default literals that are specific for the current DBMS are created. You can also choose to create literals that comply with the JDBC specification or ANSI SQL literals for dates and timestamps.

jdbc selects the creation of JDBC compliant literals. These should be usable with every JDBC based tool, including your own Java code: {d '2004-04-28'} or {ts '2002-04-02 12:02:00.042'}. This is the recommended format if you plan to use SQL Workbench/J (or any other JDBC based tool) to run the generated statements.

ansi selects the creation of ANSI SQL compliant date literals: DATE '2004-04-28' or TIMESTAMP '2002-04-02 12:04:00'. Please consult the manual of the target DBMS, to find out whether it supports ANSI compliant date literals.

default selects the creation of quoted date and timestamp literals in ISO format (e.g. '2004-04-28'). Several DBMS support this format (e.g. PostgreSQL, Microsoft SQL Server)

dbms selects the creation of specific literals to be used with the current DBMS (using e.g. the to_date() function for Oracle). The format of these literals can be customized if necessary in workbench.settings using the keys workbench.sql.literals.[type].[datatype].pattern where [type] is the type specified with this parameter and [datatype] is one of time, date, timestamp. If you add new literal types, please also adjust the key workbench.sql.literals.types which is used to show the possible values in the GUI (auto-completion "Save As" dialog, Options dialog). If no type is specified (or dbms), SQL Workbench/J first looks for an entry where [type] is the current dbid. If no value is found, default is used.

You can define the default literal format to be used for the WbExport command in the options dialog.

This parameter supports auto-completion.

-commitEvery

A numeric value which identifies the number of INSERT or UPDATE statements after which a COMMIT is put into the generated SQL script.

-commitEvery=100

will create a COMMIT; after every 100th statement.

If this is not specified, one COMMIT; will be added at the end of the script. To suppress the final COMMIT, you can use -commitEvery=none. Passing -commitEvery=atEnd is equivalent to -commitEvery=0.

-createTable

Possible values: true, false

If this parameter is set to true, the necessary CREATE TABLE command is put into the output file. This parameter is ignored when creating UPDATE statements.

Note that this will only create the table including its primary key. This will not create other constraints (such as foreign key or unique constraints) nor will it create indexes on the target table.

-useSchema

Possible values: true, false

If this parameter is set to true, all table names are prefixed with the appropriate schema. The default is taken from the global option Include owner in export.

-keyColumns

A comma separated list of column names that occur in the table or result set that should be used as the key columns for UPDATE or DELETE statements.

If the table does not have key columns, or the source SELECT statement uses a join over several tables, or you do not want to use the key columns defined in the database, this parameter can be used to define the key columns to be used for the UPDATE statements. This parameter overrides any key columns defined on the base table of the SELECT statement.

-includeAutoIncColumns

Possible values: true, false

Default value: defined by global option

With this parameter you can override the global option to include identity and auto-increment columns for INSERT statements.

-includeReadOnlyColumns

Possible values: true, false

Default value: false

By default, columns that are marked as read-only by the JDBC driver or are defined as a computed column are not part of generated SQL statements. By setting this parameter to true, those columns will be included in INSERT statements.

-useMultiRowInserts

Possible values: true, false

Default value: false

This option is only valid when using -type=sqlinsert

When generating INSERT statements, one statement per row will be generated. To generate more efficient multi-row inserts, specify true for this parameter.
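
A sketch of an export using multi-row inserts (table and file names are only examples):

WbExport -type=sqlinsert
         -file='c:/data/person.sql'
         -useMultiRowInserts=true
         -sourceTable=person;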

20.7. Parameters for Spreadsheet types (ods, xlsm, xls, xlsx)

Parameter | Description
-title The name to be used for the worksheet
-infoSheet

Possible values: true, false

Default value: false

If set to true, a second worksheet will be created that contains the generating SQL of the export. For ods exports, additional export information is available in the document properties.

-fixedHeader

Possible values: true, false

Default value: true unless a target sheet is specified

If set to true, the header row will be "frozen" in the Worksheet so that it will not scroll out of view.

-autoFilter

Possible values: true, false

Default value: true unless a target sheet is specified

If set to true, the "auto-filter" feature for the column headers will be turned on.

-autoColWidth

Possible values: true, false

Default value: true unless a target sheet is specified

If set to true, the width of the columns is adjusted to the width of the content.

-targetSheet -targetSheetName

Possible values: any valid index or name for a worksheet in an existing Excel file

This parameter is only available for XLS and XLSX exports

When using this parameter, the data will be written into an existing file and worksheet without changing the formatting in the spreadsheet. No formatting is applied as it is assumed that the target worksheet is properly set up.

[Note]

The parameters -autoFilter, -fixedHeader and -autoColWidth can still be used. If -targetSheet or -targetSheetName are specified, they default to false unless they are explicitly passed as true.

If the parameters -dateFormat or -timestampFormat are specified together with a target sheet, the format for date/timestamp columns in the Excel sheet will be overwritten. To overwrite the format in the Excel sheet, those parameters must be specified explicitly.

If this parameter is used, the target file specified with the -file parameter must already exist.

If -targetSheet is supplied, the value for -targetSheetName is ignored.

These parameters support auto-completion if the -file parameter is already supplied.
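
A sketch of writing a query result into an existing workbook (file, sheet and query are only examples; the file must already exist):

WbExport -type=xlsx
         -file='c:/reports/template.xlsx'
         -targetSheetName='Data';
SELECT * FROM sales_summary;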

-offset

Possible values: either a column/row combination or a cell reference in Excel format ("D3")

This parameter is only available for XLS and XLSX exports

When this parameter is specified the data is written starting at the specified location. No data will be written above or to the left of the specified cell.

The values can be given as a numeric column/row combination, e.g. -offset=5,6. Data will then be written starting with the fifth column in the sixth row. Alternatively the reference can be specified as an Excel reference: -offset=C5.

-formulaColumns

A comma separated list of column names (from the export result) that should be treated as formulas in the generated spreadsheet.

The content of the value retrieved from the database is taken "as-is" into the spreadsheet cell, which is then marked as containing a formula. Errors in the formula are not reported. The SQL on which the export is based needs to generate the correct syntax for the formula.

No formatting will be applied for those cells.

-maxDigits

This parameter is only valid for XLS or XLSX exports and defines the maximum number of digits to be displayed for decimal values. This is only applied to the display formatting, the actual values will contain the original value as retrieved from the database.

-fixedDigits

This parameter is only valid for XLS or XLSX exports and defines a fixed number of decimal digits to be used when applying an Excel format for decimal columns. This is only applied to the display formatting, the actual values will contain the original value as retrieved from the database.

This parameter is ignored if -maxDigits is also provided.

20.8. Parameters for HTML export

Parameter | Description
-createFullHTML

Possible values: true, false

Default value: true

If this is set to true, a full HTML page (including <html>, <body> tags) will be created.

-escapeHTML

Possible values: true, false

Default value: true

If this is set to true, values inside the data will be escaped (e.g. the < sign will be written as &lt;) so that they are rendered properly in an HTML page. If your data contains HTML tags that should be written to the output as real tags, this parameter must be set to false.

-title The title for the HTML page (put into the <title> tag of the generated output)
-preDataHtml

With this parameter you can specify an HTML chunk that will be added before the export data is written to the output file. This can be used to e.g. create a heading for the data: -preDataHtml='<h1>List of products</h1>'.

The value will be written to the output file "as is". Any escaping of the HTML must be provided in the parameter value.

-postDataHtml

With this parameter you can specify an HTML chunk that will be added after the data has been written to the output file.
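
A sketch of a complete HTML export using these parameters (table, file and title names are only examples):

WbExport -type=html
         -file='c:/data/products.html'
         -title='Products'
         -preDataHtml='<h1>List of products</h1>'
         -postDataHtml='<p>Generated by WbExport</p>'
         -sourceTable=products;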

20.9. Parameters for JSON export

Parameter | Description
-nullString

Defines the string value that should be written into the output file for a NULL value.
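
A sketch of a JSON export (table and file names are only examples):

WbExport -type=json
         -file='c:/data/person.json'
         -nullString='null'
         -sourceTable=person;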

20.10. Compressing export files

The WbExport command supports compressing of the generated output files. This includes the "main" export file as well as any associated LOB files.

When using WbImport you can import the data stored in the archives without unpacking them. Simply specify the archive name with the -file parameter. SQL Workbench/J will detect that the input file is an archive and will extract the information "on the fly". Assume the following export command:

WbExport -type=text -file=/home/data/person.txt -compress=true -sourceTable=person;

This command will create the file /home/data/person.zip that will contain the specified person.txt. To import this export into the table employee, you can use the following command:

WbImport -type=text -file=/home/data/person.zip -table=employee;

Assuming the PERSON table had a BLOB column (e.g. a picture of the person), the WbExport command would have created an additional file called person_blobs.zip that would contain all BLOB data. The WbImport command will automatically read the BLOB data from that archive.

20.11. Examples

20.11.1. Simple plain text export

WbExport -type=text
         -file='c:/data/data.txt'
         -delimiter='|'
         -decimal=','
         -sourcetable=data_table;

Will create a text file with the data from data_table. Each column will be separated with the character |. Each fractional number will be written with a comma as the decimal separator.

20.11.2. Exporting multiple tables

WbExport -type=text
         -outputDir='c:/data'
         -delimiter=';'
         -header=true
         -sourcetable=table_1, table_2, table_3, table_4;

This will export each specified table into a text file in the specified directory. The files are named "table_1.txt", "table_2.txt" and so on. To export all tables of a schema, the -sourceTable parameter supports wildcards:

WbExport -type=text
         -outputDir='c:/data'
         -delimiter=';'
         -header=true
         -sourcetable=my_schema.*;

Limiting the export data when using a table based export can be done using the -tableWhere argument. This requires that the specified WHERE condition is valid for all tables, e.g. when every table has a column called MODIFIED_DATE.

WbExport -type=text
         -outputDir='c:/data'
         -delimiter=';'
         -header=true
         -tableWhere="WHERE modified_date > DATE '2009-04-02'"
         -sourcetable=table_1, table_2, table_3, table_4;

This will add the specified WHERE clause to each SELECT, so that only rows that were changed after April 2nd, 2009 are exported.

20.11.3. Export based on a SELECT statement

WbExport -type=text
         -file='c:/data/data.txt'
         -delimiter=','
         -decimal=','
         -dateFormat='yyyy-MM-dd';
SELECT * FROM data_table;

20.11.4. Export a complete schema

To export all tables from the current connection into tab-separated files and compress the files, you can use the following statement:

WbExport -type=text
         -outputDir=c:/data/export
         -compress=true
         -sourcetable=*;

This will create one zip file for each table containing the exported data as a text file. If a table contains BLOB columns, the blob data will be written into a separate zip file.

The files created by the above statement can be imported into another database using the following command:

WbImport -type=text
         -sourceDir=c:/data/export
         -extension=zip
         -checkDependencies=true;

20.11.5. Export as SQL INSERT script

To generate a file that contains INSERT statements that can be executed on the target system, the following command can be used:

WbExport -type=sqlinsert
         -file='c:/data/newtable.sql'
         -table=newtable;
SELECT * FROM table1, table2
WHERE table1.column1 = table2.column1;

will create a SQL script that contains statements like INSERT INTO newtable (...) VALUES (...); the list of columns consists of all columns that are defined by the SELECT statement.

If the parameter -table is omitted, the creation of SQL INSERT statements is only possible, if the SELECT is based on a single table (or view).

20.11.6. Exporting LOB data

[Note]

To extract the contents of CLOB columns you have to specify the parameter -clobAsFile=true, otherwise the contents of the CLOB columns will be written directly into the export file. BLOB columns will always be exported into separate files.

When exporting tables that contain BLOB columns, one file for each blob column and row will be created. By default the generated filenames will contain the row and column number to make the names unique. You can however control the creation of filenames when exporting LOB columns using several different approaches. If a unique name is stored within the table you can use the -filenameColumn parameter to generate the filenames based on the contents of that column:

WbExport -file='c:/temp/blob_table.txt'
         -type=text
         -delimiter=','
         -filenameColumn=file_name;

Will create the file blob_table.txt, and for each blob a file whose name is retrieved from the column BLOB_TABLE.FILE_NAME. Note that if the filename column is not unique, blob files will be overwritten without an error message.

You can also base the export on a SELECT statement and then generate the filename using several columns:

WbExport -file='c:/temp/blob_table.txt'
         -type=text
         -delimiter=','
         -filenameColumn=fname;
SELECT blob_column, 'data_'||id_column||'_'||some_name||'.'||type_column as fname
FROM blob_table;

This example assumes that the following columns are part of the table blob_table: id_column, some_name and type_column. The filenames for the blob of each row will be taken from the computed column fname. To be able to reference the column in the WbExport you must give it an alias.

This approach assumes that only a single blob column is exported. When exporting multiple blob columns from a single table, it's only possible to create unique filenames using the row and column number (the default behaviour).

20.11.7. Replace data during export

When writing the export data, values in character columns can be replaced using regular expressions.

WbExport -file='/path/to/export.txt'
         -type=text
         -replaceExpression='(\n|\r\n)' -replaceWith='*'
         -sourceTable=export_table;

This will replace each newline (either Windows®' CR/LF or Unix LF) with the character *.

The value for -replaceExpression defines a regular expression. In the example above multiple new lines will be replaced with multiple * characters. To replace consecutive new lines with a single * character, use the regular expression -replaceExpression='(\n|\r\n)+'. (Note the + sign after the brackets)