#define Name | Explanation |
OTL_DB2_CLI | for DB2 Call Level Interface (CLI) |
OTL_ODBC_ENTERPRISEDB | for the EnterpriseDB ODBC provider. EnterpriseDB is a commercially available variant of PostgreSQL that has been made compatible with Oracle to some extent. |
OTL_INFORMIX_CLI, OTL_INFORMIX_CLI_64_BIT | for the Informix Call Level Interface (32-bit and 64-bit) on Unix (when OTL_ODBC_UNIX is enabled). |
OTL_IODBC_BSD | for ODBC on BSD Unix, when the iODBC package is used |
OTL_ODBC | for ODBC |
OTL_ODBC_MSSQL_2005 | Microsoft SQL Server 2005 requires special treatment when VARCHAR(MAX), VARBINARY(MAX), and NVARCHAR(MAX) are used. SQL Server 2005's Native Client (SNAC) handles the new XXX(MAX) data types differently from the TEXT, NTEXT, and IMAGE data types. If the XXX(MAX) types are not used, #define OTL_ODBC can be used. Otherwise, #define OTL_ODBC_MSSQL_2005 should be used. |
OTL_ODBC_MSSQL_2008 | MS SQL Server 2008 has new features such as datetime2, date, time, and file-stream-based VARBINARY(MAX). This #define enables OTL support for most of the new features. It should be used with SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, and SQL Server 2016. |
OTL_ODBC_MULTI_MODE | This #define should be used when there is a need to connect via ODBC to more than one database type at the same time. For more detail, see otl_connect::set_connection_mode() and OTL example 675. |
OTL_ODBC_LEGACY_RPC | This #define should be used with SQLite ODBC drivers, or with other ODBC drivers when otl_connect::direct_exec() returns 0 where it is supposed to return a positive integer (a.k.a. the rows-processed count). |
OTL_ODBC_MYSQL | for MyODBC/MySQL. The difference between OTL_ODBC_MYSQL and OTL_ODBC is that transactional ODBC function calls are turned off for OTL_ODBC_MYSQL, since MySQL does not have transactions unless the InnoDB table type is used. This #define should only be used with MyODBC 2.5, which is very old at this point in time. See the MySQL based OTL code examples for more detail. |
OTL_ODBC_POSTGRESQL | PostgreSQL ODBC can be used with the standard #define OTL_ODBC. However, PostgreSQL has at least two ODBC drivers, and some of them should be used with #define OTL_ODBC_POSTGRESQL. The following list specifies the differences between #define OTL_ODBC and #define OTL_ODBC_POSTGRESQL in more detail: |
OTL_ODBC_TIMESTEN_UNIX | for TimesTen on Unix/Linux. TimesTen supports ODBC. Unlike many other database systems, where ODBC API support may be much slower than the proprietary interface, ODBC is the native TimesTen interface that operates directly with the database engine. The TimesTen ODBC driver has the following extensions that are available through OTL. See "how to compile TimesTen in Linux/Unix" for more detail. |
OTL_ODBC_TIMESTEN_WIN | for TimesTen on Windows. TimesTen supports ODBC. Unlike many other database systems, where ODBC API support may be much slower than the proprietary interface, ODBC is the native TimesTen interface that operates directly with the database engine. The TimesTen ODBC driver has the following extensions that are available through OTL. |
OTL_ODBC_UNIX | for ODBC bridges in Unix |
OTL_ODBC_zOS | for ODBC on IBM zOS. |
OTL_ODBC_XTG_IBASE6 | for Interbase 6.x via XTG Systems' ODBC driver. The reason for introducing this #define is that this ODBC driver is the only Open Source ODBC driver for Interbase. Other drivers, like Easysoft's ODBC driver for Interbase, are commercial products, and using them defeats the purpose of using Interbase as an Open Source database server. |
OTL_ORA8 | for OCI8 |
OTL_ORA8I | for OCI8i |
OTL_ORA9I | for OCI9i. All code that compiles and works under #define OTL_ORA8 and OTL_ORA8I should work when OTL_ORA9I is used. |
OTL_ORA10G | for OCI10g. All code that compiles and works under #define OTL_ORA8, OTL_ORA8I, and OTL_ORA9I should work with OTL_ORA10G. |
OTL_ORA10G_R2 | for OCI10g, Release 2 (Oracle 10.2). All code that compiles and works under #define OTL_ORA8, OTL_ORA8I, OTL_ORA9I, and OTL_ORA10G should work with OTL_ORA10G_R2. |
OTL_ORA11G |
for OCI11g Release 1 (Oracle 11.1). All code that compiles and works under #define OTL_ORA8, OTL_ORA8I, OTL_ORA9I, OTL_ORA10G, and OTL_ORA10G_R2 should work with OTL_ORA11G. |
OTL_ORA11G_R2 | for OCI 11g Release 2 (Oracle 11.2). All code that compiles and works under #defines OTL_ORA8-11G should work with OTL_ORA11G_R2. |
OTL_ORA12C |
for OCI 12c (Oracle 12.1). All code that
compiles under #defines OTL_ORA8-11G_R2 should work with
OTL_ORA12C. |
OTL_ORA12C_R2 | for OCI 12c R2 (Oracle 12.2). All code that compiles under #defines OTL_ORA8-12C should work under OTL_ORA12C_R2. |
OTL_ORA18C |
for OCI 18c. All code that compiles under
#defines OTL_ORA8-12C_R2 should work under OTL_ORA18C. |
OTL_ORA19C |
for OCI 19c and OCI 21c. All code that compiles under #defines OTL_ORA8-18C should work under OTL_ORA19C. |
#define | Explanation |
OTL_ACE | (the same as #define OTL_STL, only for use with the Adaptive Communication Environment (ACE)). This #define makes OTL compile with ACE. Most features of OTL that require #define OTL_STL to be on compile with ACE, except for otl_output_iterator, otl_input_iterator, and the STL vector based PL/SQL table container classes (otl_XXX_vec). OTL stream iterators were not implemented for ACE since the concept of stream iterators is not present in ACE. The same applies to the otl_XXX_vec classes: vectors are not implemented in ACE; ACE has only dynamic arrays with dynamically defined sizes. |
OTL_ADD_NULL_TERMINATOR_TO_STRING_SIZE | This #define enables the addition of one byte / Unicode character to the size of a string buffer when the buffer gets allocated on the program's heap. This alleviates the burden of remembering that an extra byte / Unicode character needs to be added to the string buffer size to accommodate the string's NULL terminator. |
OTL_ANSI_CPP | for turning on ANSI C++ compliance mode: ANSI C++ typecasts (static_cast<>, const_cast<>, reinterpret_cast<> instead of C-style typecasts), optional function throw clauses, the typename keyword instead of class in template type parameters, etc. |
OTL_ANSI_CPP_11_VARIADIC_TEMPLATES | This #define can be used to enable OTL features that depend on C++11 variadic templates for C++11 compliant compilers for which OTL doesn't enable them automatically. As of October 2013, OTL enables this #define for GNU C++ 4.7 and higher, as well as for Visual C++ 2013 and higher. |
OTL_BIGINT | This #define enables support for bigint (signed 64-bit integer) bind variables by specifying a signed 64-bit integer data type name, for example: #define OTL_BIGINT __int64 // VC++, Borland C++ or #define OTL_BIGINT long long // GNU C++. ODBC and DB2-CLI support 64-bit integers natively, and so does OTL. No 32-bit OCI (OCI7, OCI8, OCI8i, OCI9i, OCI10g, OCI11g) prior to OCI 11.2 supports 64-bit integers, so OTL has to emulate this type of bind variable via strings (char[XXX]). OTL allocates and binds a string variable with a placeholder that is defined as <bigint>. When OTL_BIGINT is used under a 32-bit C++ compiler together with one of the OTL_ORAxx #defines (or with OTL_ODBC and an ODBC driver that doesn't support 64-bit integers natively), the following two #defines also need to be enabled: OTL_BIGINT_TO_STR and OTL_STR_TO_BIGINT. In 64-bit OCIs (OTL_ORAxx) on LP64 platforms, #define OTL_ORA_MAP_BIGINT_TO_LONG can be used to map <bigint> to 64-bit longs, which is more efficient than the char[XXX] OCI binding for <bigint>. When OTL_ORA11G_R2 is defined, OTL_BIGINT is supported natively for both 32-bit and 64-bit OCI 11.2. |
OTL_BIGINT_TO_STR(n,str) | This #define is required when OTL_BIGINT is enabled and one of the OTL_ORAxx #defines (or OTL_ODBC with an ODBC driver that doesn't support 64-bit integers natively) is enabled, in order to support OTL's internal bigint-to-string conversion. This #define is supposed to provide bigint-to-string conversion code that is most probably C++ compiler specific (because 64-bit ints are not part of the ANSI C++ standard), for example: #if defined(_MSC_VER) // VC++ |
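A minimal sketch (an assumption, not necessarily the library's recommended defaults) of what such compiler specific conversion macros could look like, using _i64toa()/_atoi64() on Visual C++ and sprintf()/atoll() on GNU C++ or clang:

#include <stdio.h>
#include <stdlib.h>
#if defined(_MSC_VER) // VC++
#define OTL_BIGINT __int64
#define OTL_BIGINT_TO_STR(n,str) _i64toa(n,str,10);
#define OTL_STR_TO_BIGINT(str,n) n=_atoi64(str);
#else // GNU C++, clang
#define OTL_BIGINT long long
#define OTL_BIGINT_TO_STR(n,str) sprintf(str,"%lld",n);
#define OTL_STR_TO_BIGINT(str,n) n=atoll(str);
#endif
#include <otlv4.h>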
OTL_BIND_VAR_STRICT_TYPE_CHECKING_ON | This #define enables "bind variable strict type checking", that is, typos in bind variable data type declarations get checked strictly. For performance, OTL checks as few characters as possible in a bind variable declaration in order to recognize a legitimate data type declaration. Sometimes this results in parts of an unrecognized declaration being left as is, which, in its turn, causes a database runtime error, typically an SQL statement parse error. In most cases that's okay, no trouble whatsoever. In very rare cases, depending on the concrete release of a database API on a specific platform, it causes a program core dump / crash. It is recommended to use this #define as part of the "Debug mode" in order to sort out errors of this kind. Then, when compiling in "Release mode", the #define can be dropped. |
OTL_CHECK_IN_TYPE_FUNC, OTL_CHECK_OUT_TYPE_FUNC | These #defines enable customized "variable type checking". For example, if an INSERT statement has a bind variable of "bigint" (signed 64-bit integer), it is safe to supply a signed 32-bit integer (int) or a signed 16-bit integer (short). Or, if a SELECT statement has an output column of "int", it is safe to read it into a signed 64-bit integer C++ variable. The #defines point OTL to user defined predicate functions that return bool to indicate which data types are compatible. For
example: bool otl_check_in_type_func(int bind_var_type, int cpp_var_type); #define OTL_CHECK_IN_TYPE_FUNC otl_check_in_type_func bool otl_check_out_type_func(int bind_var_type, int cpp_var_type); #define OTL_CHECK_OUT_TYPE_FUNC otl_check_out_type_func #include <otlv4.h> bool otl_check_in_type_func (int bind_var_type, int cpp_var_type) { if(bind_var_type==otl_var_bigint && cpp_var_type==otl_var_int) return true; else return false; } bool otl_check_out_type_func (int bind_var_type, int cpp_var_type) { if(bind_var_type==otl_var_int && cpp_var_type==otl_var_bigint) return true; else return false; } OTL_CHECK_IN_TYPE_FUNC defines a predicate function to indicate data type compatibility of input (in direction from C++ to SQL) variables. OTL_CHECK_OUT_TYPE_FUNC defines a predicate function to indicate data type compatibility of output (in direction from SQL to C++) variables. For more detail on input / output variables, see the OTL stream concept chapter. It is safe to define just one of these #defines. For example, to define OTL_CHECK_IN_TYPE_FUNC and not to define OTL_CHECK_OUT_TYPE_FUNC if customized variable type checking is needed for input variables only. The predicate functions that are passed into OTL via these #defines are used in addition (via "or" operator (||) ) to the default type checking, which tries to match the types of a bind variable and a C++ variable exactly. Also, if OTL_STRICT_NUMERIC_TYPE_CHECK_ON_SELECT is used, these #defines do not override it. |
OTL_CLANG_THREAD_SAFETY_ON | This #define enables OTL to use CLANG thread safety annotations when C++11 or higher is used under #defines OTL_CPP_11_ON, OTL_CPP_14_ON, or OTL_CPP_17_ON. OTL doesn't use -Wthread-safety-negative because it's still an experimental feature. -Wthread-safety can be used with this #define. This #define is recommended for use with CLANG 3.7 and higher. |
OTL_CONNECT_POOL_ON | This #define enables the OTL connect pooling template class. The class implements database connection pooling, which can be used with otl_connect objects. |
OTL_CONTAINER_CLASSES_HAVE_OPTIONAL_MEMBERS | This #define enables the optional data members (that normally get enabled by other #defines / get conditionally compiled) in classes such as otl_datetime (tz_hour, tz_minute) and otl_column_desc (charset_form, char_size). This allows multiple instances of OTL (like, say, OTL_ODBC and OTL_ORA_UTF8) to be compiled into separate object files and linked together into the same executable. The #define solves the following problem: when the optional data members in the otl_column_desc and otl_datetime classes are enabled, the classes become size incompatible between different instances of OTL in the same program. In order to make them size compatible again, the optional data members need to be enabled for all instances of OTL that are built into the same executable (for example, one module enables OTL_ODBC, another module enables OTL_ORA_UTF8, and both modules get linked together into the same executable). This #define needs to be enabled in all compilation units (.cpp files). |
OTL_CPP_11_ON | This #define enables OTL to use C++11 features, such as rvalue references, move constructors, noexcept, nullptr, variadic template functions, etc. OTL uses C++11 features automatically under Visual C++ 10 or higher up to Visual C++ 2013 (a.k.a. always on). g++ enables such features only when the "-std=c++11" command line switch is used (a.k.a. optional). |
OTL_CPP_14_ON | This #define enables OTL to use C++14 features, such as std::make_unique, etc. The current list of C++ compilers that support C++14 includes at least g++, clang, and Visual Studio 2015. When this #define is enabled, it automatically enables #define OTL_CPP_11_ON. |
OTL_CPP_17_ON | This #define enables OTL to use C++17 features, such as std::uncaught_exceptions(), etc. The current list of C++ compilers that have some support for C++17 includes at least g++ 6.1 and higher (-std=c++17) and CLANG 3.8 and higher (-std=c++1z). When this #define is enabled, it automatically enables #define OTL_CPP_11_ON and #define OTL_CPP_14_ON. OTL automatically enables this #define when /std:c++latest is used with Visual C++ 2017. |
OTL_CPP_20_ON | This #define enables OTL to use C++20 features, such as std::span, etc. When this #define is enabled, it automatically enables #define OTL_CPP_11_ON, #define OTL_CPP_14_ON, and #define OTL_CPP_17_ON. This #define needs to be enabled explicitly for now, until C++ compilers fully implement the C++20 standard. |
OTL_CPP_23_ON | This #define should be used when the C++ compiler is used in C++23 mode. |
OTL_C_STR_FOR_UNICODE_STRING_TYPE | This #define should be used when #define OTL_UNICODE_STRING_TYPE is used and the specified string class has a "c_str()" function under a different name, as in wxWidgets version 2.9.x or higher: wc_str(). This #define specifies the "c_str()" function name, for example: #define OTL_UNICODE #define OTL_UNICODE_CHAR_TYPE wxChar #define OTL_UNICODE_STRING_TYPE wxString #define OTL_UNICODE_STRING_TYPE_CAST_FROM_CHAR(s, c_ptr, len) {s=wxString(c_ptr,len);} #define OTL_C_STR_FOR_UNICODE_STRING_TYPE wc_str |
OTL_DB2_CLI_MAP_LONG_VARCHAR_TO_VARCHAR | This #define works in combination with #define OTL_DB2_CLI. It should be used in the case of the DB2 CLI on the client side and DB2 OS/390 on the server side, because all VARCHAR table columns that are >= 255 bytes get reported by the DB2 CLI as SQL_LONG_VARCHARs (normally reserved for DB2 CLOB columns). In DB2 UDB distributed (non-OS/390 flavor), all VARCHARs get reported as the DB2 CLI's SQL_VARCHARs. When #define OTL_DB2_CLI_MAP_LONG_VARCHAR_TO_VARCHAR is defined, all VARCHAR table columns that are shorter than or equal to (<=) the value defined by the #define get mapped to SQL_VARCHAR, even though the DB2 CLI reports the columns as SQL_LONG_VARCHARs. For example: #define OTL_DB2_CLI #define OTL_DB2_CLI_MAP_LONG_VARCHAR_TO_VARCHAR 4000 #include <otlv4.h> In this example, all VARCHAR table columns that are <= 4000 bytes will be mapped to SQL_VARCHAR, even though the client code connects to a DB2 OS/390 database and the client DB2 CLI reports the columns as SQL_LONG_VARCHARs. This kind of data type mapping happens only on SELECT statements, or stored procedures that return a result set. |
OTL_COMPILER_HAS_STD_OPTIONAL | When #define OTL_STREAM_WITH_STD_OPTIONAL_ON is enabled, the C++ compiler that compiles OTL supports C++17 (std::optional), and the actual optional<> template is std::optional<> instead of the experimental one, this #define can be used in order to resolve a compilation error if one comes up during compilation. |
OTL_DEFAULT_CHAR_NULL_TO_VAL | When this #define is set to a char value, then in the case of a NULL returned from the database, OTL assigns that value to the variable that is used in operator>>(char&) or operator>>(unsigned char&). At the same time, otl_stream::is_null() can be used to check for NULL. This default value is more of a convenience than a necessity. |
OTL_DEFAULT_NUMERIC_NULL_TO_VAL | When this #define is set to a numeric value, then in the case of a NULL returned from the database, OTL assigns that value to the variable that is used in operator>>(numeric_type&). At the same time, otl_stream::is_null() can be used to check for NULL. This default value is more of a convenience than a necessity. |
OTL_DEFAULT_DATETIME_NULL_TO_VAL | When this #define is set to a value of the otl_datetime data type, then in the case of a NULL returned from the database, OTL assigns that value to the variable that is used in operator>>(otl_datetime&). At the same time, otl_stream::is_null() can be used to check for NULL. This default value is more of a convenience than a necessity. |
OTL_DEFAULT_STRING_NULL_TO_VAL | When this #define is set to a string value, then in the case of a NULL returned from the database, OTL assigns that value to the variable that is used in operator>>(std::string&), operator>>(ACE_TString&), or in a string class defined by #define USER_DEFINED_STRING_CLASS: operator>>(USER_DEFINED_STRING_CLASS&). OTL also assigns the value to the variable that is used in operator>>(char*) or operator>>(unsigned char*). At the same time, otl_stream::is_null() can be used to check for NULL. This default value is more of a convenience than a necessity. |
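A minimal sketch of how one of these defaults can be combined with otl_stream::is_null(), assuming an ODBC build, an already connected otl_connect named db, and a hypothetical table test_tab with a nullable numeric column f1:

#include <iostream>
#define OTL_ODBC
#define OTL_DEFAULT_NUMERIC_NULL_TO_VAL -1
#include <otlv4.h>
// ...
otl_stream s(50, "select f1 from test_tab", db);
int f1;
while(!s.eof()){
  s >> f1;               // f1 becomes -1 when the column value is NULL
  if(s.is_null())
    std::cout << "f1 is NULL" << std::endl;
}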
OTL_DESTRUCTORS_DO_NOT_THROW | Top C++ experts say that "throwing destructors" are bad. OTL throws exceptions from destructors by default in order to communicate database errors via otl_exceptions. OTL also makes the maximum effort to detect the stack unwinding situation and not to throw exceptions from destructors in that case, because that would result in an immediate program abort. #define OTL_DESTRUCTORS_DO_NOT_THROW enables try/catch blocks that prevent OTL destructors from throwing exceptions. See this for more detail on the topic. If you enable this #define, it's strongly recommended to make sure that every single instance of otl_connect, otl_stream, and otl_lob_stream releases its underlying resources when it goes out of scope. In the worst case, information about a database error may be lost. |
OTL_DISABLE_OPERATOR_GT_GT_FOR_OTL_VALUE_OTL_DATETIME | This #define disables the operator<<(ostream&, const otl_value<otl_datetime>&), so that the operator can be overloaded outside the OTL header file. |
OTL_ENABLE_MSSQL_MARS | MS SQL Server 2005 and 2008 support Multiple Active Result Sets (MARS), which is not enabled by default. In order for MARS to be enabled, an ODBC function call needs to be made. This #define enables the corresponding ODBC function call. |
OTL_EXCEPTION_COPIES_INPUT_STRING_IN_CASE_OF_OVERFLOW | This #define allows otl_exceptions to capture the first XXX characters of a large input string (VARCHAR value) in the case of the OTL defined exception "input value is too large to fit into the buffer" (code 32005), where XXX is the size of the corresponding :v<char[XXX]> bind variable. This #define should be used with #define OTL_EXCEPTION_DERIVED_FROM. For more detail, see OTL code examples 205, 206, and 207. |
OTL_EXCEPTION_INITIALIZED_WITH_BASE_CLASS_CONSTRUCTOR_CALL | This #define passes a call to a base class constructor to the following constructors in the otl_tmpl_exception class when #define OTL_EXCEPTION_DERIVED_FROM is used: A need for this kind of thing arises when more context has to be passed into the base class that the OTL exception is derived from. For example: |
OTL_EXCEPTION_IS_DERIVED_FROM_STD_EXCEPTION | This #define is a shortcut for the following when #define OTL_UNICODE_EXCEPTION_AND_RLOGON is NOT used: #define OTL_EXCEPTION_DERIVED_FROM std::exception #define OTL_EXCEPTION_HAS_MEMBERS \ virtual const char* what() const \ { \ return reinterpret_cast<const char*>(msg); \ } When #define OTL_UNICODE_EXCEPTION_AND_RLOGON is used, OTL converts the double-byte (UTF-16) character error string into a single-byte character error string and returns a const char* pointer to the single-byte character error string. If you need to get to the double-byte (UTF-16) error message, deriving otl_exception from std::exception is not appropriate, because std::exception::what() returns a const char* instead of a const wchar_t*. |
OTL_EXCEPTION_DERIVED_FROM | This #define allows the otl_exception class to be included into an already existing hierarchy of exception classes. The #define should specify the name of an already existing class, which is used as part of the exception class hierarchy. The STL exception class hierarchy is a good example: otl_exception can be derived from one of the classes in the hierarchy, so that a catch block that catches exceptions of the base class will be able to catch exceptions of the otl_exception class. In the OTL header file, if this #define is defined, the class named in the #define will be specified as the base class of the otl_exception class. |
OTL_EXCEPTION_ENABLE_ERROR_OFFSET | This #define enables the so-called SQL Statement Parse Error Offset, and it is available for OTL/OCIx only. When an otl_exception gets thrown and it has the otl_exception::stm_text field populated, the parse error offset will point to the actual position of the SQL error. |
OTL_EXCEPTION_HAS_MEMBERS | This #define allows the user to define new member functions or data members in the otl_exception class. The OTL header file checks whether this #define is defined, and then the body of the #define gets included textually into the body of the otl_exception class. This simple technique allows the otl_exception class to have new members. This #define can be used in combination with #define OTL_EXCEPTION_DERIVED_FROM. |
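A minimal sketch of the technique; the error_code() helper is hypothetical, and "code" is otl_exception's database error code data member, which the textually included body can access:

#include <iostream>
#define OTL_EXCEPTION_HAS_MEMBERS \
  int error_code() const {return code;}
#include <otlv4.h>
// ...
try{
  // OTL calls that may fail
}catch(otl_exception& ex){
  std::cerr << ex.error_code() << std::endl; // same value as ex.code
}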
OTL_EXPLICIT_NAMESPACES | (for turning on namespaces) |
OTL_EXCEPTION_STM_TEXT_SIZE | This #define specifies a new size for the otl_exception::stm_text buffer. By default, it's 2048 bytes, that is, the actual otl_exception will contain only the first 2047 bytes of the SQL statement associated with the exception. If more bytes of the SQL statement text are needed, this #define can come in handy. This #define can be used in combination with #define OTL_EXCEPTION_ENABLE_ERROR_OFFSET, for example: #define OTL_EXCEPTION_ENABLE_ERROR_OFFSET #define OTL_EXCEPTION_STM_TEXT_SIZE 32767 |
OTL_EXTENDED_EXCEPTION | (for enabling the otl_exception's extended fields for OTL/ODBC and OTL/DB2-CLI). This is for fixing problem 47. |
OTL_INITIAL_VAR_LIST_SIZE | OTL internally uses arrays of bind variable descriptors. Initial sizes of such arrays are in the range of [512..1024], depending on the underlying database APIs. If an SQL statement contains more bind variables, or SELECT output columns, than the initial size of such an array, OTL may resize the internal arrays of bind variable descriptors dynamically as it parses the SQL statement. Some programs open and close OTL streams a lot and are supposed to stay up and running for months, which eventually results in dynamic heap fragmentation. This #define can override the initial size in order to avoid dynamic heap fragmentation. It's recommended to lower the size to, say, 64 or 128 if heap fragmentation is a concern: #define OTL_INITIAL_VAR_LIST_SIZE (128) |
OTL_LEGACY_TRACE_DATETIME_FORMAT_ON | This #define needs to be defined in order to switch the default formatting of the fractional part of a second (class otl_datetime) to the old way (before OTL 4.0.364). The old format doesn't have any leading zeros. For example, if the fractional precision is set to 6 and the actual value is 1234, then in the old format the value will be .1234, and in the new format it will be .001234. The format can be completely overridden with #define OTL_TRACE_FORMAT_DATETIME and #define OTL_TRACE_FORMAT_TZ_DATETIME. |
OTL_MAPS_SQL_C_FLOAT_TO_SQL_REAL | ODBC / DB2-CLI only. This #define enables the mapping of SQL_C_FLOAT to SQL_REAL. By default, OTL maps SQL_C_FLOAT to SQL_FLOAT. It's normally required when OTL is used with the MS Access ODBC driver and floating point values are used. |
OTL_NO_THROW_IS_EMPTY_THROW | OTL uses noexcept, throw(), etc. for its functions that don't throw exceptions, depending on the version of the C++ compiler and what the compiler being used supports. This #define overrides that default behavior and uses throw(). So, this #define is recommended for pre-C++11 compilers that only support throw(). |
OTL_ODBC_CHAR_SQLWCHAR_CONVERSION_FUNCS | Under #define OTL_ODBC and #define OTL_DB2_CLI, OTL converts characters (char) into wide characters (SQLWCHAR) and wide characters into characters in otl_connect::rlogon(), under #define OTL_EXCEPTION_IS_DERIVED_FROM_STD_EXCEPTION, etc., when Unicode ODBC drivers are used and when OTL based C++ projects enable Unicode characters, for example, in a Visual C++ project. Also, national character sets (like, say, Cyrillic based Eastern European character sets) can be used at the same time. The OTL default character-to-wide-character conversion functions do type casts, which is incorrect for Eastern European character sets. This #define is needed for such use cases. This
#define is used to override the OTL default
character-to-wide-character functions (see example below). // Implement the functions below using proper OS calls to // convert your national character set to wide characters, for example // for Windows, MultiByteToWideChar(). #define OTL_ODBC_CHAR_SQLWCHAR_CONVERSION_FUNCS \ inline void otl_convert_char_to_SQLWCHAR_2 \ (SQLWCHAR *dst, \ const unsigned char *src) \ { \ while (*src) \ *dst++ = static_cast<SQLWCHAR>(*src++); \ *dst = 0; \ } \ \ inline void otl_convert_SQLWCHAR_to_char_2 \ (unsigned char *dst, \ const SQLWCHAR *src) \ { \ while (*src) \ *dst++ = static_cast<unsigned char>(*src++);\ *dst = 0; \ } |
OTL_ORA_SDO_GEOMETRY | OCI 11/12 or higher. This #define enables native support for Oracle Spatial Geometry. otl_stream defines operators >> / << for reading / writing oci_spatial_geometry values from / to the Oracle database, and otl_refcur_stream defines operators >> / << for reading / writing oci_spatial_geometry values from / to the Oracle database. For more detail on the Oracle Spatial Geometry type MDSYS.SDO_GEOMETRY, see the corresponding Oracle manual. |
OTL_STD_STRING_VIEW_CLASS | This #define enables OTL support for std::string_view or std::experimental::string_view. By default, such OTL support is disabled. This #define should be defined to the actual string view class name, for example: #define OTL_STD_STRING_VIEW_CLASS std::experimental::string_view The feature can only be enabled when OTL is compiled in C++14 or C++17 mode. Respectively, OTL defines #defines OTL_CPP_14_ON / OTL_CPP_17_ON automatically, or the #defines should be enabled explicitly before including the OTL header file (otlv4.h). The feature requires #defines OTL_CPP_14_ON / OTL_CPP_17_ON to be present. |
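A minimal sketch of writing a string view value into an otl_stream, assuming a C++17 build, an already connected otl_connect named db, and a hypothetical table test_tab(f1 int, f2 varchar(30)):

#define OTL_ODBC
#define OTL_CPP_17_ON
#define OTL_STD_STRING_VIEW_CLASS std::string_view
#include <string_view>
#include <otlv4.h>
// ...
otl_stream o(50, "insert into test_tab values(:f1<int>,:f2<char[31]>)", db);
std::string_view sv("row one");
o << 1 << sv; // the configured string view class is accepted as an input value
o.flush();
db.commit();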
OTL_STD_UNICODE_STRING_VIEW_CLASS | This #define enables OTL support for std::basic_string_view or std::experimental::basic_string_view, when basic_string_view is used with UTF-16 characters. By default, such OTL support is disabled. This #define should be defined to the actual string view class name, for example: #define OTL_STD_UNICODE_STRING_VIEW_CLASS \ std::experimental::basic_string_view<UTF-16 character type> The feature can only be enabled when OTL is compiled in C++14 or C++17 mode. Respectively, OTL defines #defines OTL_CPP_14_ON / OTL_CPP_17_ON automatically, or the #defines should be enabled explicitly before including the OTL header file (otlv4.h). The feature requires #defines OTL_CPP_14_ON / OTL_CPP_17_ON to be present. Also, this #define requires #define OTL_UNICODE_CHAR_TYPE to be present. |
OTL_THIRD_PARTY_STRING_VIEW_CLASS, OTL_THIRD_PARTY_UNICODE_STRING_VIEW_CLASS |
These #defines can be used when classes from
third party libraries similar to std::string_view /
std::basic_string_view<> are used, for example
boost::string_view / boost::basic_string_view<>. The
only requirement for third party string views is that the
class should provide length() and data() methods. These
#defines do not require #defines OTL_CPP_14_ON /
OTL_CPP_17_ON
to be enabled. Examples: #define OTL_THIRD_PARTY_STRING_VIEW_CLASS boost::string_view #define OTL_THIRD_PARTY_UNICODE_STRING_VIEW_CLASS \ boost::basic_string_view<char16_t> |
OTL_THROW |
This #define allows internal "throw
(otl_exception)" statements inside the OTL header file to be
overridden with a customized version. By default, #define
OTL_THROW(x) is throw x. |
OTL_STRCAT_S OTL_STRCPY_S OTL_STRNCPY_S OTL_SPRINTF_S |
These #defines allow the user to override
what C string functions OTL uses, for example, here's how it
could be done for VC++ 8.0 and higher: #define OTL_STRCAT_S(dest, dest_sz, src) strcat_s(dest, dest_sz, src) #define OTL_STRCPY_S(dest, dest_sz, src) strcpy_s(dest, dest_sz, src) #define OTL_STRNCPY_S(dest, dest_sz, src, count) \ strncpy_s(dest, dest_sz, src, count) #define OTL_SPRINTF_S sprintf_s If there is a need to use more secure C string functions than the default ones, these #defines should be used. |
OTL_STREAM_CUSTOM_CHAR_LTLT_OPERATORS | This #define is a customization point for replacing the default OTL otl_stream& operator<<(const char) and otl_stream& operator<<(const unsigned char) operators. For example, there may be a need to define otl_stream& operator<<(const int8_t) and otl_stream& operator<<(const uint8_t) to implement numeric semantics for uint8_t and int8_t, but the C++ compiler typedefs those types as unsigned char and signed char. |
OTL_STREAM_WITH_STD_CHAR_ARRAY_ON | This #define enables OTL support for std::array<char,...>. By default, such OTL support is disabled for backward compatibility. For the feature to be enabled, OTL requires #define OTL_CPP_14_ON to be defined as well, or GNU C++ 4.4 to be used with -std=c++0x. The advantage of the std::array<char,...> container is that it does not decay to a pointer to char, so OTL can check the maximum size of the container and throw an otl_exception if the actual database string value exceeds the maximum size of the container. |
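A minimal sketch of reading into a std::array<char,...>, assuming a C++14 build, an already connected otl_connect named db, and a hypothetical table test_tab(f1 int, f2 varchar(30)):

#define OTL_ODBC
#define OTL_CPP_14_ON
#define OTL_STREAM_WITH_STD_CHAR_ARRAY_ON
#include <array>
#include <otlv4.h>
// ...
otl_stream s(50, "select f1, f2 from test_tab", db);
int f1;
std::array<char,31> f2; // an otl_exception is thrown if a value doesn't fit
while(!s.eof())
  s >> f1 >> f2;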
OTL_STREAM_WITH_STD_UNICODE_CHAR_ARRAY_ON | This #define enables OTL support for std::array<char16_t,...>. By default, such OTL support is disabled for backward compatibility with older C++ compilers. For the feature to be enabled, OTL requires #define OTL_CPP_14_ON to be defined as well, or GNU C++ 4.4 to be used with -std=c++0x. The advantage of std::array<char16_t,...> is that it does not decay to a pointer to char16_t, so OTL can check the maximum size of the container and throw an otl_exception if the actual database string size exceeds the maximum size of the container. |
OTL_STREAM_WITH_STD_OPTIONAL_ON |
This #define enables OTL support for std::optional<>.
By default, such OTL support is disabled, in order to avoid
conflicts in overloaded operators >> / << in BOOST. |
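A minimal sketch of reading a nullable column into a std::optional; the availability of operator>> for std::optional under this #define is an assumption based on the description above, and db and test_tab are hypothetical:

#define OTL_ODBC
#define OTL_CPP_17_ON
#define OTL_STREAM_WITH_STD_OPTIONAL_ON
// #define OTL_COMPILER_HAS_STD_OPTIONAL // if std::optional (not experimental) is needed
#include <optional>
#include <otlv4.h>
// ...
otl_stream s(50, "select f1 from test_tab", db);
std::optional<int> f1;
while(!s.eof()){
  s >> f1;
  if(!f1.has_value()){ /* the column value was NULL */ }
}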
OTL_STREAM_WITH_STD_SPAN_ON | This #define enables OTL support for
std::span.
By default, such OTL support is disabled for backward
compatibility. For the feature to be enabled, OTL requires
#define OTL_CPP_20_ON to be
defined as well. |
OTL_STREAM_WITH_STD_TUPLE_ON | This #define enables OTL support for std::tuple<>. By default, such OTL support is disabled for backward compatibility. For the feature to be enabled, OTL requires #define OTL_CPP_14_ON to be defined as well, or GNU C++ 4.4 to be used with -std=c++0x. |
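A minimal sketch of reading whole rows into a std::tuple, assuming a C++14 build, an already connected otl_connect named db, and a hypothetical table test_tab(f1 int, f2 varchar(30)):

#define OTL_ODBC
#define OTL_CPP_14_ON
#define OTL_STREAM_WITH_STD_TUPLE_ON
#include <string>
#include <tuple>
#include <otlv4.h>
// ...
otl_stream s(50, "select f1, f2 from test_tab", db);
std::tuple<int,std::string> row;
while(!s.eof()){
  s >> row; // reads one full output row into the tuple
}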
OTL_STREAM_WITH_STD_VARIANT_ON | This #define enables OTL support
for std::variant<>.
By default, such OTL support is disabled for backward
compatibility. For the feature to be enabled, OTL requires
#define OTL_CPP_17_ON to be
defined as well. |
OTL_TRACE_FORMAT_DATETIME, OTL_TRACE_FORMAT_TZ_DATETIME |
OTL
tracing uses the US date format. In order to change
the date format, say, to Year-Month-Day, the following two
#defines need to be defined: #define OTL_TRACE_FORMAT_TZ_DATETIME(s) \ s.year<<"-"<<s.month<<"-"<<s.day \ <<" "<<s.hour<<":"<<s.minute<<":"<<s.second<<"."<<s.fraction \ <<" "<<s.tz_hour<<":"<<s.tz_minute #define OTL_TRACE_FORMAT_DATETIME(s) \ s.year<<"-"<<s.month<<"-"<<s.day \ <<" "<<s.hour<<":"<<s.minute<<":"<<s.second<<"."<<s.fraction By default, OTL uses the US date format: #define OTL_TRACE_FORMAT_TZ_DATETIME(s) \ s.month<<"/"<<s.day<<"/"<<s.year \ <<" "<<s.hour<<":"<<s.minute<<":"<<s.second<<"."<<s.fraction \ <<" "<<s.tz_hour<<":"<<s.tz_minute #define OTL_TRACE_FORMAT_DATETIME(s) \ s.month<<"/"<<s.day<<"/"<<s.year \ <<" "<<s.hour<<":"<<s.minute<<":"<<s.second<<"."<<s.fraction It is sufficient to overload operator<<(ostream&, const otl_datetime&) and use it in the #define's above. However, some projects that use OTL tracing developed their own stream bridge classes, which are used with OTL tracing. In the case of such a stream bridge class, operator<<(stream_bridge_class&, const otl_datetime&) can be overloaded and used, for example: #define OTL_TRACE_FORMAT_TZ_DATETIME(s) s #define OTL_TRACE_FORMAT_DATETIME(s) s |
OTL_ODBC_ALTERNATE_RPC | This #define should be used with the PostgreSQL ODBC driver. The driver returns as many row counts via SQLRowCount() calls as there are rows in a batch INSERT statement. The #define enables a loop that fetches all individual row counts and sums them up (+=). As a result, otl_stream::get_rpc() returns the total, which is correct for PostgreSQL. Normally, commercially available ODBC drivers return a single row count on a batch INSERT. |
OTL_ODBC_LOGOFF_FREES_HANDLES | Some ODBC drivers can't reuse the underlying ODBC connect related resources. In order for OTL to fully recover from a database failure, the resources need to be released and new resources need to be allocated for otl_connect objects to work. Some versions of the Oracle ODBC driver show that type of behavior. This #define forces otl_connect::logoff() to free the ODBC connect resources so that the next call to otl_connect::rlogon() would allocate the resources again. |
OTL_ODBC_STRING_TO_TIMESTAMP | This #define defines a conversion from the string/varchar format to the timestamp format (otl_datetime), for example, for PostgreSQL's timestamps with time zone, or MS SQL Server 2008's datetimeoffset(7): for PostgreSQL: #define OTL_ODBC_STRING_TO_TIMESTAMP(str,tm) \ for PostgreSQL and #define OTL_UNICODE: #define OTL_ODBC_STRING_TO_TIMESTAMP(str,tm) \ for MS SQL 2008: #define OTL_ODBC_STRING_TO_TIMESTAMP(str,tm) \ for MS SQL 2008 and #define OTL_UNICODE: #define OTL_ODBC_STRING_TO_TIMESTAMP(str,tm) \ |
OTL_ODBC_SQL_STATEMENT_WITH_DIAG_REC_OUTPUT | Some MS SQL Server commands (like BACKUP, DBCC, etc.) don't use result sets to communicate their output. They use the diagnostic record format instead. Microsoft recommends using SQLExecDirect() instead of SQLPrepare() + SQLExecute() with such commands. Also, such commands take time to execute. However, some of the commands return control from the SQLExecDirect() call right away. For example, BACKUP does that. For the BACKUP to finish successfully, the statement handle needs to remain valid. Therefore, the OTL direct exec functions can't be used. In order to work around this limitation, the otl_stream class was extended to execute MS SQL Server's BACKUP, DBCC, etc. commands. This #define is needed for the otl_stream class to recognize these commands. This #define specifies a function that accepts an SQL statement text and returns true if one of the commands is recognized, for example: inline bool sql_statement_with_diag_rec_output(const char* stm_text) { if(strncmp(stm_text,"BACKUP",6)==0) return true; else if(strncmp(stm_text,"DBCC",4)==0) return true; else return false; } #define OTL_ODBC_SQL_STATEMENT_WITH_DIAG_REC_OUTPUT \ sql_statement_with_diag_rec_output Also, see the otl_stream::get_next_diag_rec() function, and examples 688, 689. |
OTL_ODBC_TIMESTAMP_TO_STRING | This #define defines a conversion from the timestamp format (otl_datetime) to the string/varchar format, for example, for PostgreSQL's timestamp with time zone, or MS SQL Server 2008's datetimeoffset(7): for PostgreSQL: #define OTL_ODBC_TIMESTAMP_TO_STRING(tm,str) \ for PostgreSQL and #define OTL_UNICODE: #define OTL_ODBC_TIMESTAMP_TO_STRING(tm,str) \ for MS SQL 2008: #define OTL_ODBC_TIMESTAMP_TO_STRING(tm,str) \ for MS SQL 2008 and #define OTL_UNICODE: #define OTL_ODBC_TIMESTAMP_TO_STRING(tm,str) \ |
OTL_ODBC_TIME_ZONE | This #define enables the tz_hour and tz_minute fields in the otl_datetime class. ODBC doesn't support the time zone components yet, so this #define needs to be used with #define OTL_ODBC_STRING_TO_TIMESTAMP and #define OTL_ODBC_TIMESTAMP_TO_STRING. |
OTL_ODBC_USES_SQL_FETCH_SCROLL_WHEN_SPECIFIED_IN_OTL_CONNECT | This #define enables a workaround for the following problem. An application enables #define OTL_ODBC, it connects to more than one type of ODBC driver, and some of the ODBC drivers don't implement SQLFetchScroll() well. The application has to enable #define OTL_ODBC_SQL_EXTENDED_FETCH_ON. At the same time, say, one of the ODBC drivers that the application connects to doesn't implement SQLExtendedFetch() well. In other words, you need access to both SQLFetchScroll() and SQLExtendedFetch() from your application, depending on the ODBC driver type. See also set_fetch_scroll_mode(). Example: #define OTL_ODBC #define OTL_ODBC_EXTENDED_FETCH_ON #define OTL_ODBC_USES_SQL_FETCH_SCROLL_WHEN_SPECIFIED_IN_OTL_CONNECT #include <otlv4.h> ... db.rlogon("userid/passwd@DSN"); db.set_fetch_scroll_mode(true); ... #define OTL_ODBC_MULTI_MODE uses SQLExtendedFetch() under the covers, and it can be used with set_fetch_scroll_mode(). |
OTL_FREETDS_ODBC_WORKAROUNDS | FreeTDS/ODBC doesn't seem to implement the database session's "auto-commit off" mode, so otl_connect's auto_commit has no effect. In order to work around this deficiency, this #define should be used. When the #define is enabled, OTL executes a "begin transaction" statement before each transaction. otl_connect::commit() or otl_connect::rollback() can be used to commit or roll back the transaction. When FreeTDS/ODBC implements the database session's "auto-commit off" mode, the #define can be safely removed, because otl_connect's auto_commit parameter would then take effect on the database sessions. For the time being, this #define is recommended for use with FreeTDS/ODBC against MS SQL. The otl_connect::auto_commit_on() / otl_connect::auto_commit_off() functions don't work for MS SQL with FreeTDS/ODBC. Until a fix becomes available, it's not recommended to use them. Sybase is slightly different: the otl_connect::auto_commit_on() / otl_connect::auto_commit_off() functions seem to work even though otl_connect::rlogon()'s auto_commit doesn't. After rlogon() has been called, it's recommended to call auto_commit_off(). See the Sybase SQL Server / FreeTDS ODBC examples for more detail. In FreeTDS/ODBC, the default for the database session's auto-commit mode is "auto-commit ON", which can't be turned off or on again. When #define OTL_FREETDS_ODBC_WORKAROUNDS is enabled, OTL emulates the database session's "auto-commit off" mode by executing "begin transaction" at the beginning of each transaction, and not executing anything when "auto-commit" is set to ON, which is the default in FreeTDS/ODBC. Also, FreeTDS/ODBC doesn't support the "transaction isolation" level, that is, otl_connect::set_transaction_isolation_level() has no effect. Until the feature is implemented in FreeTDS/ODBC, it's recommended that explicit server side settings be used instead. For example, MS SQL supports an explicit (NOLOCK) option on the FROM clause of a SELECT statement, and Sybase has the "set transaction isolation level X" command to set an explicit, session-wide transaction isolation level. |
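A minimal sketch of the transaction pattern described above; the DSN and table are hypothetical, and under this #define OTL issues "begin transaction" before each transaction:

#define OTL_ODBC
#define OTL_FREETDS_ODBC_WORKAROUNDS
#include <otlv4.h>
// ...
otl_connect::otl_initialize(); // initialize the ODBC environment
otl_connect db;
db.rlogon("userid/passwd@freetds_mssql_dsn");
otl_stream o(50, "insert into test_tab values(:f1<int>,:f2<char[31]>)", db);
o << 1 << "row one";
o.flush();
db.commit();   // or db.rollback();
db.logoff();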
OTL_MAP_SQL_DECIMAL_TO_OTL_BIGINT | By default, OTL/ODBC and OTL/DB2-CLI map the internal SQL_DECIMAL to the external SQL_C_DOUBLE on SELECT statements / stored procedures with implicit result sets. This #define maps SQL_DECIMAL to SQL_C_SBIGINT when #define OTL_BIGINT is enabled, for example: #define OTL_BIGINT __int64 #define OTL_MAP_SQL_DECIMAL_TO_OTL_BIGINT |
OTL_MAP_SQL_GUID_TO_CHAR | Before OTL 4.0.140, MS SQL GUIDs (uniqueidentifier) were mapped to char[XXX]. OTL 4.0.140 and higher maps the GUIDs to raw[16]. This #define should be used to map GUIDs to char[XXX] by default. Of course, the new default mapping can be overridden manually. Also, see example 105. |
OTL_MAP_SQL_NUMERIC_TO_OTL_BIGINT | By default, OTL/ODBC and OTL/DB2-CLI map the internal SQL_NUMERIC to the external SQL_C_DOUBLE on SELECT statements / stored procedures with implicit result sets. This #define maps SQL_NUMERIC to SQL_C_SBIGINT when #define OTL_BIGINT is enabled, for example: #define OTL_BIGINT __int64 #define OTL_MAP_SQL_NUMERIC_TO_OTL_BIGINT |
OTL_MAP_SQL_NUMERIC_TO_OTL_UBIGINT | By default, OTL/ODBC and OTL/DB2-CLI map the internal SQL_NUMERIC to the external SQL_C_DOUBLE on SELECT statements / stored procedures with implicit result sets. This #define maps SQL_NUMERIC to SQL_C_UBIGINT when #define OTL_UBIGINT is enabled, for example: #define OTL_UBIGINT unsigned long long #define OTL_MAP_SQL_NUMERIC_TO_OTL_UBIGINT |
OTL_MAP_SQL_BINARY_TO_CHAR | Before OTL 4.0.140, MS SQL TIMESTAMPs were mapped to char[XXX], which was effectively a conversion from the binary format to the hexadecimal string format. OTL 4.0.140 and higher maps MS SQL TIMESTAMPs to raw[XXX]. This #define should be used to map MS SQL TIMESTAMPs to the hexadecimal string format by default. Of course, the new default mapping can be overridden manually. |
OTL_MAP_SQL_VARBINARY_TO_RAW_LONG | Before OTL 4.0.140, "binary" database types were mapped to raw_long. OTL 4.0.140 and higher maps the "binary" types to raw[XXX]. This #define should be used to map the binary types to raw_long by default. Of course, the new default mapping can be overridden manually. Also, see example 346. |
OTL_STREAM_LEGACY_BUFFER_SIZE_TYPE | OTL 4.0.115 introduces a larger data type for the OTL stream buffer size: int instead of the old short int. The buffer size parameter was short int for a long time to keep all OTL based code compatible with older database APIs (like the original OCI7, or restrictions in some old ODBC drivers) and portable across platforms / databases. Now it is time to move on: "int" as a data type for the buffer size provides a much wider value range, and it is the default for the OTL stream buffer size parameter from OTL 4.0.115 on. However, old, legacy applications based on OTL cannot be left behind. This #define, when enabled, turns the old "short int" type for the buffer size back on. |
OTL_FUNC_THROW_SPEC_ON | This #define works in combination with #define OTL_ANSI_CPP (when OTL_ANSI_CPP is defined). It enables the function throw specification clause (introduced in OTL 4.0.50) in all OTL functions, to make them explicitly declare what types of C++ exceptions each function may throw. It looks like there is no consensus in the C++ community on whether function throw specs are good or not, so I decided to leave it up to each OTL user whether or not to enable this OTL feature by introducing this #define. |
OTL_ORA_CREATE_STORED_PROC_CALL_MAPS_RAW_TO_RAW | OCI 8/9/10/11 only. This #define should be used when otl_stream::create_stored_proc_call() needs to map stored procedure parameters of the Oracle RAW type to OTL raw[XXX], where XXX is the same as the varchar_size parameter of create_stored_proc_call(). |
OTL_ORA_CREATE_STORED_PROC_CALL_MAPS_RAW_TO_RAW_LONG | OCI 8/9/10/11 only. This #define should be used when otl_stream::create_stored_proc_call() needs to map stored procedure parameters of the Oracle RAW type to OTL raw_long. |
OTL_ORA_CUSTOM_MAP_NUMBER_ON_SELECT | This #define can be used to get more performance out of OTL. By default, on SELECT statements OTL maps Oracle NUMBER values to "double", because the NUMBER data type is not in one-to-one correspondence with C++ numeric / primitive data types. This #define allows the user to override the default data type mapping, for example: #define OTL_ORA_CUSTOM_MAP_NUMBER_ON_SELECT(ftype,elem_size,desc) \ { \ if(ftype==extFloat && desc.prec>4 && desc.prec<=10 && desc.scale==0){ \ ftype=otl_var_int; \ elem_size=sizeof(int); \ }else if(ftype==extFloat && desc.prec>0 && desc.prec<=4 && desc.scale==0){ \ ftype=otl_var_short; \ elem_size=sizeof(short int); \ }else{ \ ftype=otl_var_double; \ elem_size=sizeof(double); \ } \ } ftype can be set to any of the OTL data type codes. elem_size is the size of the corresponding data type. desc is of the otl_column_desc type. In the example above, NUMBER(X) gets mapped to different C++ data types: X in [1..4] gets mapped to "short int", X in [5..10] gets mapped to "int", and any other NUMBER gets mapped to "double". This is not done by default because it can be done in many other ways, since NUMBER and C++ primitive / numeric types are not in one-to-one correspondence, unlike DB2 or MS SQL Server numeric data types, which are. There is also the old manual data type override. |
OTL_ORA_CUSTOM_FREE_TEMP_LOB | OTL doesn't create temporary Oracle LOBs, and it doesn't deallocate temporary LOBs properly, which requires the following OCI call: OCILobFreeTemporary(). This #define adds the necessary OCI call, so that OTL handles temporary LOBs (say, allocated by the dbms_lob package and passed back to C++/OTL code) correctly. |
OTL_ORA_LEGACY_NUMERIC_TYPES | This #define should be used in combination with #define OTL_ORA10G, OTL_ORA10G_R2, or OTL_ORA11G. It disables the use of the OCI10 native SQLT_BDOUBLE / SQLT_BFLOAT bindings and reverts to the SQLT_FLT bindings, which are compatible with older versions of the OCI. This #define can be enabled when the Oracle 10g or 11g Client is used against older versions of the Oracle server, say, Oracle 9.2. When the OCI10 native SQLT_BDOUBLE / SQLT_BFLOAT bindings are used via the Oracle 10 Client with, say, an Oracle 9i database back end, the bindings don't work, because the Oracle 9i server doesn't support them. |
OTL_ORA_MAP_BIGINT_TO_LONG | This #define enables the mapping of <bigint> to signed 64-bit longs for 64-bit OCIs on LP64 platforms. It's a more efficient alternative to the char[XXX] binding and the bigint-to-string / string-to-bigint conversions (see also the following #defines: OTL_BIGINT, OTL_BIGINT_TO_STR, OTL_STR_TO_BIGINT). |
OTL_MAP_LONG_TO_SQL_C_SBIGINT | In the ODBC / DB2 CLI standard, SQL_C_SLONG contains 32 bits regardless of the size of "long int" on a given 64-bit platform. Similarly, SQL_C_SBIGINT is a signed 64-bit integer, regardless of whether the platform is 32-bit or 64-bit. ODBC drivers have different implementations of SQL_C_SLONG, meaning that some ODBC drivers deviate from the standard. OTL tries to cover all implementations of SQL_C_SLONG. This #define maps "long" (in bind variable declarations) to SQL_C_SBIGINT when sizeof(long) == 8. For example, :v1<long> will be mapped to SQL_C_SBIGINT, which is a signed 64-bit integer. |
OTL_NO_TMPL_MEMBER_FUNC_SUPPORT | OTL 4.0.127 or higher tries to use template member functions for implementing operators >> / << for numeric data types (int, unsigned, short, long, float, double, signed 64-bit int) for C++ compilers that support the feature. However, even after so many years since the C++ standard was adopted back in the summer of 1998, some C++ compilers either still have bugs in their support of the feature, or are missing the support completely. If that happens, it's possible to make OTL fall back on the old, proven, plain non-template member functions. This #define can be used to do just that. |
OTL_NUMERIC_TYPE_1 OTL_NUMERIC_TYPE_1_ID OTL_NUMERIC_TYPE_1_STR_SIZE OTL_STR_TO_NUMERIC_TYPE_1 OTL_NUMERIC_TYPE_1_TO_STR OTL_NUMERIC_TYPE_1_NO_NUMERIC_STATIC_CASTS OTL_NUMERIC_TYPE_2 OTL_NUMERIC_TYPE_2_ID OTL_NUMERIC_TYPE_2_STR_SIZE OTL_STR_TO_NUMERIC_TYPE_2 OTL_NUMERIC_TYPE_2_TO_STR OTL_NUMERIC_TYPE_2_NO_NUMERIC_STATIC_CASTS OTL_NUMERIC_TYPE_3 OTL_NUMERIC_TYPE_3_ID OTL_NUMERIC_TYPE_3_STR_SIZE OTL_STR_TO_NUMERIC_TYPE_3 OTL_NUMERIC_TYPE_3_TO_STR OTL_NUMERIC_TYPE_3_NO_NUMERIC_STATIC_CASTS |
These three sets of #defines
allow OTL to extend the list of supported numeric data types
with up to three more data types. Internally in OTL,
char[XXX] type bind variables will be used, because the
underlying database APIs do not support such numeric data
types. This #define specifies the OTL bind variable data type label for the data type (it needs to be capitalized), for example: #define OTL_NUMERIC_TYPE_1_ID "ULONG" This #define specifies the size of the OTL string bind variable that will hold the string representations of numeric values of the numeric data type, for example: #define OTL_NUMERIC_TYPE_1_STR_SIZE 40 This #define specifies the C++ compiler specific name of the numeric data type, for example: #define OTL_NUMERIC_TYPE_1 unsigned long These two #defines specify string-to-numeric-type / numeric-type-to-string conversion routines, for example: #define OTL_STR_TO_NUMERIC_TYPE_1(str,n) \ { \ sscanf(str,"%lu",&n); \ } #define OTL_NUMERIC_TYPE_1_TO_STR(n,str) \ { \ sprintf(str,"%lu",n); \ } Here is an example of the second set of such #defines: #define OTL_NUMERIC_TYPE_2_ID "UULONG" #define OTL_NUMERIC_TYPE_2_STR_SIZE 40 #define OTL_NUMERIC_TYPE_2 unsigned long long #define OTL_STR_TO_NUMERIC_TYPE_2(str,n) \ { \ sscanf(str,"%llu",&n); \ } #define OTL_NUMERIC_TYPE_2_TO_STR(n,str) \ { \ sprintf(str,"%llu",n); \ } Here is an example of the third set of such #defines: #define OTL_NUMERIC_TYPE_3_ID "LDOUBLE" #define OTL_NUMERIC_TYPE_3_STR_SIZE 60 #define OTL_NUMERIC_TYPE_3 long double #define OTL_STR_TO_NUMERIC_TYPE_3(str,n) \ { \ sscanf(str,"%Lf",&n); \ } #define OTL_NUMERIC_TYPE_3_TO_STR(n,str) \ { \ sprintf(str,"%Lf",n); \ } Here are examples of tables and SQL statements that use bind variables of such extended numeric data types: CREATE TABLE test_tab(f1 NUMBER, f2 VARCHAR2(30)); CREATE TABLE test_tab(f1 DECIMAL(38), f2 VARCHAR(30)); insert into test_tab values(:f1<uulong>,:f2<char[31]>) select f1 :#1<ulong>, f2 from test_tab where f1>=:f<ulong> and f1<=:ff<ulong> select f1 :#1<ldouble>, f2 from test_tab where f1>=:f<ldouble> and f1<=:ff<ldouble> OTL extends its bind variable parser to recognize the defined numeric data type labels, and adds the corresponding operators >> / <<. #define OTL_NUMERIC_TYPE_X_NO_NUMERIC_STATIC_CASTS should be enabled when the "large numeric types" specified in #define OTL_NUMERIC_TYPE_X can't be type cast (via static_cast<>) to regular numeric type like int, double, short int, etc. For example: #define OTL_NUMERIC_TYPE_1_NO_NUMERIC_STATIC_CAST Note: when #define OTL_UNICODE is enabled, the same to/from string conversion macros (see above) can be used. In other words, to/from Unicode string conversion macros do not need to be provided. |
OTL_ORA_SUBSCRIBE | This #define enables the otl_subscriber class. The class is Oracle 9/10 (or higher) specific. It uses the Oracle Change Notification OCI functions, which allow the user to get notified about changes to database tables of interest. This feature is especially useful in an Oracle RAC environment, though the interface also works in a standalone Oracle instance. When #define OTL_ORA_SUBSCRIBE is enabled, the following #defines need to be enabled as well: #define OTL_ORA_OCI_ENV_CREATE |
OTL_ORA_UTF8 | This #define enables OCI9i/10g support for Oracle UTF8 character encodings (UTF8, AL32UTF8). This #define is mutually exclusive with #define OTL_UNICODE, which supports UTF-16. UTF-8 seems to be more popular with Oracle C++ developers, at least in Linux/Unix. The basic difference between UTF-8 and UTF-16 is that UTF-8 is byte oriented: it's okay to use a '\0' terminated array of unsigned chars with UTF-8, as opposed to an array of unsigned 16-bit integers with UTF-16. |
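A minimal sketch of reading a UTF-8 encoded VARCHAR2 column; db and test_tab are hypothetical, and any OTL_ORAxx #define could be used in place of OTL_ORA10G_R2:

#define OTL_ORA10G_R2
#define OTL_ORA_UTF8
#include <otlv4.h>
// ...
otl_stream s(50, "select f2 from test_tab", db);
unsigned char f2[121]; // '\0'-terminated UTF-8 bytes
while(!s.eof())
  s >> f2;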
OTL_ORA7_STRING_TO_TIMESTAMP | This #define should be used when #define OTL_ORA_TIMESTAMP is not defined, but there is a need to use Oracle
TIMESTAMPs. For example: #define OTL_ORA8 // Compile OTL 4.0/OCI8 #define OTL_ORA7_TIMESTAMP_TO_STRING(tm,s) \ { \ sprintf(s, \ "%02d/%02d/%04d %02d:%02d:%02d.%06ld",\ tm.month, \ tm.day, \ tm.year, \ tm.hour, \ tm.minute, \ tm.second, \ tm.fraction \ ); \ } #define OTL_ORA7_STRING_TO_TIMESTAMP(s,tm) \ { \ sscanf(s, \ "%02d/%02d/%04d %02d:%02d:%02d.%06ld", \ &tm.month, \ &tm.day, \ &tm.year, \ &tm.hour, \ &tm.minute, \ &tm.second, \ &tm.fraction \ ); \ } #define OTL_ORA7_STRING_TO_TIMESTAMP and #define OTL_ORA7_TIMESTAMP_TO_STRING should be used together as shown in the example above. These #defines define string-to-timestamp and timestamp-to-string conversion so that operator<</>>(otl_datetime&) can be used transparently with :var<char[XXX]> bind variables. Under #define OTL_UNICODE, the following OTL_ORA7_STRING_TO_TIMESTAMP/OTL_ORA7_TIMESTAMP_TO_STRING can be used: #define OTL_ORA7_TIMESTAMP_TO_STRING(tm,s) \ { \ swprintf(s, \ L"%02d/%02d/%04d %02d:%02d:%02d.%06ld",\ tm.month, \ tm.day, \ tm.year, \ tm.hour, \ tm.minute, \ tm.second, \ tm.fraction \ ); \ } #define OTL_ORA7_STRING_TO_TIMESTAMP(s,tm) \ { \ swscanf(s, \ L"%02d/%02d/%04d %02d:%02d:%02d.%06ld", \ &tm.month, \ &tm.day, \ &tm.year, \ &tm.hour, \ &tm.minute, \ &tm.second, \ &tm.fraction \ ); \ } |
OTL_ORA7_TIMESTAMP_TO_STRING | This #define should be used together with #define OTL_ORA7_STRING_TO_TIMESTAMP. |
OTL_ORA_MAP_STRINGS_TO_CHARZ | OTL normally binds a variable size string buffer (host variable) with both VARCHAR() and CHAR() columns. ODBC and DB2 CLI handle the variable-size vs padded string comparison semantics correctly for both types of string columns. OCIx does that a little differently. So, when OTL/OCIx is used with CHAR() columns, if, say, a WHERE clause has a char[XXX] bind variable, the actual strings for the WHERE clause need to be padded to the full length of the CHAR() columns. #define OTL_ORA_MAP_STRINGS_TO_CHARZ changes the OTL default binding of string host variables. When the #define is enabled, OTL makes "CHARZ" type string bindings, which behave exactly the same way as in ODBC / DB2 CLI. However, this type of string binding has a slightly higher runtime overhead. It's up to the database developer to make the right decision on balancing performance vs portability / readability of the source code. |
OTL_ORA_MAX_UNICODE_VARCHAR_SIZE | When OTL_UNICODE and OTL_ORA8I / OTL_ORA9I are enabled, and a stream is instantiated with a SELECT statement that has two (or more) large VARCHAR2(4000) or NVARCHAR2(2000) columns, Oracle may generate the following error: ORA-01461 (Invalid length...). The error has to do with the fact that Oracle (8i/9i) treats large VARCHAR2s / NVARCHAR2s as LONGs, which means that there may be only one large VARCHAR2 / NVARCHAR2 in a SELECT statement. The only workaround that Oracle Corporation recommends for Oracle 8i/9i is that the size of large VARCHAR2s/NVARCHAR2s in a SELECT statement be limited to 4000 bytes. For PL/SQL blocks that have large VARCHAR2/NVARCHAR2 strings the workaround doesn't apply, that is, there is no such error, simply because PL/SQL treats large strings differently. #define OTL_ORA_MAX_UNICODE_VARCHAR_SIZE implements the workaround: #define OTL_UNICODE #define OTL_ORA8I //#define OTL_ORA9I int my_max_unicode_varchar_string_size=32000; // in bytes, the number is not precise, // the actual maximum may be higher #define OTL_ORA_MAX_UNICODE_VARCHAR_SIZE (my_max_unicode_varchar_string_size) #include <otlv4.h> ... my_max_unicode_varchar_string_size=32000; // in bytes otl_stream o(...); // PL/SQL block that has large VARCHAR/NVARCHAR strings ... my_max_unicode_varchar_string_size=4000; // in bytes otl_stream s(...); // SELECT statement that has two or more large VARCHAR2/NVARCHAR2 strings my_max_unicode_varchar_string_size=32000; // in bytes ... All of the above is NOT needed under #define OTL_ORA10G / OTL_ORA10G_R2, or when there is no more than one large VARCHAR2/NVARCHAR2 in the same SELECT. Sorry for this complicated stuff: a complicated bug requires a kludgy fix. The workaround is not needed for Oracle 10g because Oracle 10g changed the architecture, compared with Oracle 9i, in how large Unicode VARCHAR2 / NVARCHAR2 columns are handled inside the Oracle Client / Server. |
OTL_ORA_OCI_ENV_CREATE | This #define can only be used when one of the following is defined: OTL_ORA8I, OTL_ORA9I, OTL_ORA10G, OTL_ORA10G_R2, OTL_ORA11G, OTL_ORA11G_R2, OTL_ORA12C. The #define enables OCI Environment Handle initialization via OCIEnvCreate() instead of the older OCIInitialize() + OCIEnvInit() scheme. I don't want to go too deep into the discussion of what works and what doesn't work. Those who want to use OCIEnvCreate(), be my guests. |
OTL_ORA_ OCI_ENV_ CREATE_ MODE |
This define should be used in
a combination with #define OTL_ORA_OCI_ENV_CREATE.
When OTL_ORA_OCI_ENV_CREATE_MODE is defined, it overrides
the mode (OCI_DEFAULT/OCI_THREADED) in which OCI environment
handles will be created. For example:

#define OTL_ORA_OCI_ENV_CREATE
#define OTL_ORA_OCI_ENV_CREATE_MODE OCI_THREADED

The problem that this #define addresses is that the "default/threaded" mode was passed into otl_connect::otl_initialize() once for the whole program, instead of having to pass the same parameter into all calls to otl_connect::rlogon() or server_attach(). When this #define is enabled, it overrides everything else, so that the custom code wouldn't have to be changed. |
OTL_PARANOID_EOF |
OTL allows otl_stream::operators>>() to
be used in the following way (in a similar way to C++
streams): while(s>>f1>>f2)..., or
if(s>>f1), etc. It's based on the fact that
operators>> return otl_stream&, which is
convertible to int via a special type cast operator defined
in otl_stream. This behavior requires
otl_stream::operators>> not to throw any exceptions
when they read values beyond the end-of-file (fetch
sequence). Sometimes, however, it's important to catch a
beyond-the-end-of-file read. When this #define is enabled, OTL throws an otl_exception (error code 32043) on an attempt to read past the end of the fetch sequence. |
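A minimal sketch of the behavior this #define enables (the SELECT statement and table are hypothetical; the error code 32043 is the one quoted above):

#define OTL_ORA10G_R2
#define OTL_PARANOID_EOF // reads past end-of-fetch now throw
#include <otlv4.h>
#include <iostream>

void read_all(otl_connect& db){
  otl_stream s(50, "select f1 from test_tab", db);
  int f1;
  try{
    for(;;) s>>f1; // deliberately no eof() check
  }catch(otl_exception& e){
    if(e.code==32043)
      std::cerr<<"attempted to read past the end of the fetch sequence"<<std::endl;
    else
      throw;
  }
}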
OTL_STLPORT_ USES_STD_ ALIAS_NAMESPACE |
When #define OTL_STLPORT
is enabled, OTL can be used with STLPort. STLPort itself
can be configured to use __std_alias namespace, in which
case, #define OTL_STLPORT_USES_STD_ALIAS_NAMESPACE needs
to be enabled. |
OTL_STREAM_ NO_PRIVATE_ BOOL_ OPERATORS |
By default, OTL makes
otl_stream::operator>>(bool&) and
operator<<(const bool) private because they are not
implemented, and in some cases it is very confusing when the
C++ compiler picks a different operator instead of, say,
operator>>(bool&), which makes it harder to track
down bugs in the code at runtime. By making the
operators private, the runtime bugs of that sort
become more obvious at compile time. However, there may be
legitimate use cases when there is a need to overload
operator>>(bool&) and operator<<(const
bool). This #define, when enabled, prevents OTL from
declaring private operator>>(bool&) and
operator<<(const bool) in the otl_stream class. |
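A minimal sketch of a user-defined overload that this #define makes possible (the 'Y'/'N' CHAR(1) mapping is a hypothetical convention, not something OTL prescribes):

#define OTL_ORA10G_R2
#define OTL_STREAM_NO_PRIVATE_BOOL_OPERATORS
#include <otlv4.h>

// Map bool to/from a CHAR(1) column holding 'Y' or 'N'.
inline otl_stream& operator<<(otl_stream& s, const bool b){
  s<<(b?"Y":"N");
  return s;
}

inline otl_stream& operator>>(otl_stream& s, bool& b){
  char buf[32];
  s>>buf;
  b=(buf[0]=='Y');
  return s;
}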
OTL_ORA_ STREAM_POOL_ ASSUMES_SAME_ REF_CUR_ STRUCT_ ON_REUSE |
This #define enables an
optimization for OTL stream
pooling for OTL/OCIx where x is >=8. It is assumed
that reference cursor structures (output column lists) don't
change between calls to PL/SQL blocks / stored procedures,
meaning that the stream pool can cache the output column
descriptions and minimize the total number of calls to the
"describe_column()" function. Some applications that use the
stream pooling may benefit performance-wise. |
OTL_STREAM_ NO_PRIVATE_ UNSIGNED_ LONG_ OPERATORS |
By default, OTL makes otl_stream::operator>>(unsigned long&) and operator<<(const unsigned long) private because they are not implemented, and in some cases it is very confusing when the C++ compiler picks a different operator instead of, say, operator>>(unsigned long&), which makes it harder to track down bugs in the code at runtime. By making the operators private, the runtime bugs of that sort become more obvious at compile time. However, there may be legitimate use cases when there is a need to overload operator>>(unsigned long&) and operator<<(const unsigned long). This #define, when enabled, prevents OTL from declaring private operator>>(unsigned long&) and operator<<(const unsigned long) in the otl_stream class. |
OTL_STREAM_ POOL_USES_ STREAM_LABEL_ AS_KEY |
This #define enables the use
of otl_stream labels / SQL statement labels (for more
detail, see sqlstm_label parameter in otl_stream's
constructors and open()
function) as stream pool
keys, when the labels are specified / available. Otherwise,
the stream pool falls back on the SQL statement text as the key.
Stream labels are generally used as SQL statement text
replacements in otl_exception,
and are normally shorter than the SQL statements themselves,
so the stream pool lookups will be more efficient. |
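A minimal sketch of a labeled stream that would be pooled by its label (the label, SQL statement, and table are hypothetical; OTL_STL is required by stream pooling):

#define OTL_ORA10G_R2
#define OTL_STL
#define OTL_STREAM_POOLING_ON
#define OTL_STREAM_POOL_USES_STREAM_LABEL_AS_KEY
#include <otlv4.h>

void get_names(otl_connect& db){
  // "GET_EMP_NAMES" is used as the stream pool key instead of the SQL text
  otl_stream s(50,
               "select ename from emp where deptno=:deptno<int>",
               db,
               otl_explicit_select,
               "GET_EMP_NAMES");
  s<<10;
  char ename[31];
  while(!s.eof()) s>>ename;
}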
OTL_STREAM_ THROWS_NOT_CONNECTED_ TO_DATABASE_ EXCEPTION |
This #define enables a check
for if(!otl_connect::connected)
flag in the otl_stream::open() function and in the
otl_stream constructors. The reason is that some database
APIs (OCI for some versions of Oracle on some platforms (the
scope is not quite clear)) return unobvious error codes and
error messages. Therefore, there is a need for a clear error
code / message, so OTL throws the following OTL-defined
exception
instead of relying on the database API to return a clear
error code / message. |
OTL_STRICT_ NUMERIC_TYPE_ CHECK_ ON_SELECT |
By default, on a SELECT
statement, or a stored procedure that returns an implicit
result set (ODBC, DB2 CLI) / a reference cursor (PL/SQL),
OTL tries to describe the [SELECT] output columns and map
internal datatypes to external C++ datatypes. In the
case of internal numeric datatypes, the corresponding
external C++ datatypes may not have exactly the same domain
as the internal datatypes. And, say, the values are being
read into a variable of a third numeric datatype. In this
case OTL has to convert the values from one numeric datatype
to another. This #define
(OTL_STRICT_NUMERIC_TYPE_CHECK_ON_SELECT) enforces an exact
match between the datatype of the output variable that the
internal numeric value is being read into and the datatype
of the internal value itself. In some cases, as was mentioned in the previous paragraph, the internal-to-external numeric datatype mapping is not exact. In those cases, the numeric [SELECT] column's datatype may be explicitly overridden to ensure the exact match between the internal and the external datatypes. Also, if the external and internal datatypes match exactly, OTL provides a small performance boost by avoiding any numeric datatype conversion. |
OTL_ODBC_SQL _EXTENDED _FETCH_ON |
(for ODBC and DB2-CLI). Forces OTL to generate calls to SQLExtendedFetch() (buffer size > 1), or SQLFetch() (buffer size == 1), instead of SQLFetchScroll(), in case the ODBC version is greater than or equal to ODBC 3.0. This #define is introduced mainly to fix a bug in DB2-CLI on Linux, and in some ODBC drivers, when CLOBs/BLOBs are being fetched with SQLFetchScroll(). |
OTL_ODBC_ SELECT_STM_ EXECUTE_ BEFORE_ DESCRIBE |
This #define (#define
OTL_ODBC_SELECT_STM_EXECUTE_BEFORE_DESCRIBE) should be used
in a combination with #define OTL_ODBC, and it changes the
OTL stream's default sequence of ODBC functions in the case
of a SELECT statement. The default sequence is as
follows: SQLPrepare(), SQLDescribeCol(),...,
SQLBindParameter(),..., SQLExecute(), SQLFetch(). Newer ODBC
drivers tend to do more optimization of database round-trips,
and they return the SELECT column descriptions along with
the first batch of rows. The ODBC specification calls this
kind of optimization an implementation detail, and leaves it
up to the implementors of the ODBC driver. In the case of such
optimization, the sequence of ODBC functions becomes:
SQLPrepare(), SQLBindParameter(),..., SQLExecute(),
SQLDescribeCol(), ..., SQLFetch(). |
OTL_ORA_ DECLARE_ COMMON_ READ_ STREAM_ INTERFACE |
(for OCI8/8i/9i/10g only).
When this #define is enabled, OTL declares the following
abstract / interface class, which both otl_refcur_stream and otl_stream are derived
from:

class otl_read_stream_interface{
public:
  virtual int is_null(void) = 0;
  virtual void rewind(void) = 0;
  virtual int eof(void) = 0;
  virtual otl_read_stream_interface& operator>>(otl_datetime& s) = 0;
  virtual otl_read_stream_interface& operator>>(char& c) = 0;
  virtual otl_read_stream_interface& operator>>(unsigned char& c) = 0;
  virtual otl_read_stream_interface& operator>>(OTL_STRING_CONTAINER& s) = 0;
  virtual otl_read_stream_interface& operator>>(char* s) = 0;
  virtual otl_read_stream_interface& operator>>(unsigned char* s) = 0;
  virtual otl_read_stream_interface& operator>>(int& n) = 0;
  virtual otl_read_stream_interface& operator>>(unsigned& u) = 0;
  virtual otl_read_stream_interface& operator>>(short& sh) = 0;
  virtual otl_read_stream_interface& operator>>(long int& l) = 0;
  virtual otl_read_stream_interface& operator>>(float& f) = 0;
  virtual otl_read_stream_interface& operator>>(double& d) = 0;
  virtual otl_read_stream_interface& operator>>(otl_long_string& s) = 0;
  virtual otl_read_stream_interface& operator>>(otl_lob_stream& s) = 0;
  virtual otl_column_desc* describe_select(int& desc_len) = 0;
  virtual otl_var_desc* describe_out_vars(int& desc_len);
  virtual otl_var_desc* describe_next_out_var(void);
  virtual ~otl_read_stream_interface(){}
};

This interface is useful when there is a lot of common code for fetching rows either via otl_stream or via otl_refcur_stream. |
OTL_ORA_ DOES_NOT_ UNDEF_ MIN_MAX |
OTL/OCI8/8i/9i/10g #undef's
#define min and #define max that are defined in one of the
OCI header files. This was done because in some cases min()
and max() were declared as functions in C++ standard header
files. However, when ATL is used, min() and max() are
defined as #define's in "windef.h". If the OTL header file
is included after the windef.h file, the min() and max()
#defines get #undef'ed by OTL, so the symbols become
unavailable. When #define OTL_ORA_DOES_NOT_UNDEF_MIN_MAX is
enabled, it makes OTL keep #define min and #define max as
they were defined (if they were defined). |
OTL_ORA _TEXT_ON |
When fstream.h gets included before the OTL header file, fstream.h declares the object "text", which is part of the C++ stream environment. Oracle OCI header files use the symbol "text" as well. Depending on the platform and the C++ compiler, the symbol "text" is defined in OCI either as a typedef or as a #define. In either case, it interferes with the C++ "text" defined in fstream.h. #define OTL_ORA_TEXT_ON is introduced to fix the problem. So, all the user needs to do in order to make fstream.h and the OTL header compile together is to put #define OTL_ORA_TEXT_ON before including the OTL header file, and after #include <fstream.h> (see the sketch below). In the case of fstream.h being included after the OTL header file, #define OTL_ORA_TEXT_ON also needs to be defined before the inclusion of the OTL header file. |
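A minimal sketch of the include order (OTL_ORA9I is just one possible OTL/OCIx #define; classic, pre-standard fstream.h is assumed, as in the description above):

#include <fstream.h>     // declares "text" as part of the classic C++ stream library
#define OTL_ORA_TEXT_ON  // keep OCI's "text" from clashing with it
#define OTL_ORA9I
#include <otlv4.h>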
OTL_ORA _TIMESTAMP |
This #define enables support
for Oracle 9i's timestamps, timestamps with time zones, and
timestamps with local time zones. The #define forces OTL to
use the OCI "OCIDateTime*" resource instead of the OCI
7-byte date structure. OCIDateTime allows the timestamp
values (down to microseconds and time zone hours/minutes) to
be written and read. OCI 8.1.7 has support for OCIDateTime's
when they are used with Oracle 9i on the back end, meaning
that Oracle Client 8.1.7 can be connected to Oracle 9i, and
OTL_ORA_TIMESTAMP can be enabled at the same time. In other
words, #define OTL_ORA8I, or #define OTL_ORA9I can be
used in a combination with OTL_ORA_TIMESTAMP, if the
underlying Oracle Client (OCI) libraries have the
corresponding functionality. PL/SQL (index-by) tables of otl_datetime's that are bound with "tables of TIMESTAMPs" are not supported this time around, due to some bugs in the OCI code (I just could not track down the problem: no info on metalink.oracle.com, no references on dejanews either, no code samples). TIMESTAMPs, as parameters of stored procedures, can be used with otl_datetime's. PL/SQL (index-by) tables can be bound with "tables of DATEs", as usual. |
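A minimal sketch of reading and writing a TIMESTAMP column via otl_datetime (the table and column names are hypothetical; OTL_ORA9I is just one possible OTL/OCIx #define):

#define OTL_ORA9I
#define OTL_ORA_TIMESTAMP // bind otl_datetime via OCIDateTime*
#include <otlv4.h>

void copy_timestamps(otl_connect& db){
  otl_stream in(50, "select event_ts from events", db);
  otl_stream out(50, "insert into events_copy values(:ts<timestamp>)", db);
  otl_datetime ts;
  while(!in.eof()){
    in>>ts;  // fractional seconds come through via OCIDateTime
    out<<ts;
  }
}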
OTL_ROLLS_BACK_ BEFORE_LOGOFF |
When this #define is enabled,
OTL calls otl_connect::rollback() to roll back any
uncommitted changes before calling otl_connect::logoff().
This kind of behavior is needed for compatibility with some
third party libraries, frameworks, or transaction monitors. |
OTL_SELECT_STREAM_ ALTERNATE_FETCH |
This #define enables an alternate scheme of
how OTL (pre)fetches rows from SELECT statements / reference
cursors. By default, OTL fetches the next batch of rows as
soon as the end of the current row is reached in terms of
calling otl_stream::operators >>(). Also, when large
objects (CLOBs / BLOBs / TEXT / IMAGE / VARCHAR(MAX), etc.)
are being read via otl_lob_stream
(LOB stream mode), large object columns are normally put at
the end of the SELECT output column list, which is a normal
limitation of ODBC drivers / DB2 CLI. Oracle allows CLOB /
BLOB columns to be put anywhere in the SELECT output column
list. The OTL default fetching algorithm changes the
internal state of the OCI to an incorrect state, which
screws up the lengths of the CLOB / BLOB columns, and their
fetch sequences. The alternate fetching scheme / algorithm doesn't have that kind of side effect. So, if you use Oracle, and if you want to be able to handle a SELECT output column list that contains CLOB / BLOB columns in the middle of the list, and you want to read the CLOB / BLOB columns in the LOB stream mode (via otl_lob_stream), you should enable this #define. All this sounds complicated. To put things in simpler terms, when this #define is enabled, OTL tries to defer fetching the next batch of rows as long as possible in the "OTL stream state machine". |
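A minimal sketch of the scenario described above: a CLOB column in the middle of the SELECT output list, read in LOB stream mode (the table, column names, and buffer size are hypothetical):

#define OTL_ORA10G_R2
#define OTL_SELECT_STREAM_ALTERNATE_FETCH
#include <otlv4.h>

void read_docs(otl_connect& db){
  otl_stream s(20, "select id, doc_body, created from docs", db);
  int id;
  otl_lob_stream lob;
  otl_datetime created;
  while(!s.eof()){
    s>>id>>lob>>created;      // CLOB is not the last column in the list
    otl_long_string buf(32000);
    while(!lob.eof()){
      lob>>buf;               // read the CLOB piece by piece
      // ... process buf.v, buf.len() bytes/characters ...
    }
    lob.close();
  }
}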
OTL_STR_ TO_BIGINT(str,n) |
This #define is required when
OTL_BIGINT is
enabled and one of the OTL_ORAxx #defines is enabled (or OTL_ODBC is
used with an ODBC driver that doesn't support 64-bit
integers natively), in order to support OTL's
internal string-to-bigint conversion. This #define is
supposed to provide string-to-bigint conversion code, which is
most probably C++ compiler specific (because 64-bit ints are
not part of the ANSI C++ standard), for example, for GNU C++ (see the sketch below). |
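A minimal sketch of what such a conversion macro might look like, assuming GNU C++ and strtoll() (this is an illustration, not the manual's exact example; <stdlib.h> must be included before the macro is used):

#if defined(__GNUC__) // GNU C++
#define OTL_STR_TO_BIGINT(str,n)  \
{                                 \
  n=strtoll(str,0,10);            \
}
#endif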
OTL_STREAM_ READ_ ITERATOR_ON |
This #define enables
the OTL
stream read iterator, which provides a JDBC-like getter interface. Typically, OTL stream read iterators can be used with SELECT statements, stored procedures that return implicit result sets (ODBC, DB2-CLI), or stored procedures that return reference cursors (Oracle). |
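A minimal sketch of the getter-style interface (the table, column names, and buffer sizes are hypothetical):

#define OTL_ORA10G_R2
#define OTL_STREAM_READ_ITERATOR_ON
#include <otlv4.h>

void dump_emp(otl_connect& db){
  otl_stream s(50, "select empno, ename from emp", db);
  otl_stream_read_iterator<otl_stream, otl_exception, otl_lob_stream> rs;
  rs.attach(s);            // bind the iterator to the stream
  while(rs.next_row()){    // advance one row at a time
    int empno;
    char ename[31];
    rs.get(1, empno);      // get output columns by position
    rs.get(2, ename);
    // ... use empno / ename ...
  }
  rs.detach();
}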
OTL_THROWS_ ON_SQL_SUCCESS_ WITH_INFO |
OTL/ODBC, OTL/DB2-CLI only.
This #define enables the following function: otl_connect::set_throw_on_sql_success_with_info().
When the function sets the "throw flag", OTL throws an
otl_exception if SQLExecute() / SQLExecDirect() returns
SQL_SUCCESS_WITH_INFO. By raising
an exception on SQL_SUCCESS_WITH_INFO, OTL makes it possible
to communicate messages that would normally get retrieved
via SQLGetDiagRec() calls right after SQLExecute() /
SQLExecDirect() returns. In order to get the maximum amount
of diagnostic information from the ODBC driver, this #define
should be used in a combination with #define OTL_EXTENDED_EXCEPTION. |
OTL_TRACE_LEVEL OTL_TRACE_ STREAM OTL_TRACE _LINE_PREFIX OTL_TRACE_ LINE_SUFFIX OTL_TRACE_ENABLE_ STREAM_LABELS OTL_TRACE_LEVEL_NOCHECK_ ON_LOGON |
These #defines enable OTL function call tracing.
OTL tracing uses the C++ stream interface (ostream, fstream)
to log OTL function calls with arguments, "this" addresses of
class instances, etc.
For example, OTL_TRACE_LEVEL can be set to a program variable such as unsigned int my_trace_level (see the sketch below). #define OTL_TRACE_LEVEL_NOCHECK_ON_LOGON can be used to enable the following OTL trace entry unconditionally: otl_connect(this=...)::rlogon(connect_str="<userid>/*****@<db_ptr>", auto_commit=0); Also, see OTL examples for more detail. |
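A minimal sketch of enabling OTL tracing (the trace level mask value, output stream, and prefix are assumptions; see the OTL examples for the exact bit meanings of OTL_TRACE_LEVEL):

#include <iostream>

unsigned int my_trace_level=0xFF;            // hypothetical mask: trace "everything"
#define OTL_TRACE_LEVEL my_trace_level       // the level can be changed at run time
#define OTL_TRACE_STREAM std::cerr           // where the trace lines go
#define OTL_TRACE_LINE_PREFIX "OTL TRACE ==> "
#define OTL_ORA10G_R2
#include <otlv4.h>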
OTL_STL | (for turning on std::strings, and STL-compliant OTL stream iterators: otl_input_iterator, otl_output_iterator, and including STL header files, like <vector>, etc.) |
OTL_STL_NOSTD _NAMESPACE |
(for excluding namespace std, when #define OTL_STL is on). This is mainly for fixing problem 42. |
OTL_STLPORT | (the same as #define OTL_STL, only for use with STLPort 4.x). This #define makes OTL compile with STLPort. |
OTL_STREAM _POOLING_ON |
(for enabling otl_stream pooling). This #define requires #define OTL_STL to be defined first because STL containers were used in the implementation of the otl_stream pooling. For more detail, see examples 113, 114, 115. |
OTL_UNICODE_ STRING_TYPE_ CAST_FROM_ CHAR |
When #define OTL_UNICODE, #define OTL_ORA8I/9I/10G/10G_R2,
and #define OTL_UNICODE_STRING_TYPE
are enabled, a string class that is different from
std::wstring may be used, for example:

#define OTL_ORA10G
#define OTL_UNICODE
...
#define OTL_UNICODE_STRING_TYPE my_wide_char_string

Let's also assume that my_wide_char_string is not 100% compatible with std::wstring and doesn't have the following function:

assign(const charT* s, size_type n)

In order for OTL to handle wide character strings of the my_wide_char_string type, it needs to know how to efficiently make a string out of a raw buffer + the string length. #define OTL_UNICODE_STRING_TYPE_CAST_FROM_CHAR can be used for passing into the OTL layer a piece of code that does the conversion. For example, say, ACE_TString is used:

#define OTL_ORA10G_R2
#define OTL_UNICODE
#define OTL_UNICODE_CHAR_TYPE wchar_t
#define OTL_UNICODE_STRING_TYPE ACE_TString
#define OTL_UNICODE_STRING_TYPE_CAST_FROM_CHAR(s, c_ptr, len) {s.set(c_ptr,len,1);} |
OTL_USER_DEFINED _STRING_CLASS_ON |
(for defining a string class, other than STL's std::string, for reading from / writing to the otl_stream). This #define goes in a pair with #define USER_DEFINED_STRING_CLASS, which is used to define the actual string class name. For more detail, see examples 119, 120, 121. |
OTL_USER_ DEFINED_STRING_ CLASS_DEFAULT_ NULL_TO_VAL |
This #define
(OTL_USER_DEFINED_STRING_CLASS_DEFAULT_NULL_TO_VAL(s)) can be
used when a C++ string class is used with OTL
to read string values from the database, and string
NULLs need to be defaulted to a predefined value. If string NULLs need to
be defaulted to an empty string, some C++ string classes
have more efficient ways than assigning an empty string to
an actual string variable, especially when the variable is
used as a reusable buffer. For example, ACE_TString has fast_clear(), which keeps the string's internal buffer: it just assigns '\0' to the first element of the internal buffer, and sets the length indicator to 0. Here's what the #define should look like in the case of ACE_TString, when the desired default value for string NULLs is an empty string:

#define OTL_USER_DEFINED_STRING_CLASS_DEFAULT_NULL_TO_VAL(s) {s.fast_clear();}

std::string can be cleared via std::string::clear(), for example:

#define OTL_USER_DEFINED_STRING_CLASS_DEFAULT_NULL_TO_VAL(s) {s.clear();}

This feature becomes more important if your OTL based C++ code relies on defaulting string NULLs to a value, especially in a multi-threaded environment with an inefficient dynamic heap manager. |
OTL_UNICODE |
This #define enables Unicode
string support in OTL for Oracle8i (UCS-2), and for Oracle
9i/10g (UTF-16). Character string lengths change from byte
semantic to character semantic, meaning that sizes are given
in characters rather than in bytes. For more detail, refer
to the corresponding Oracle manuals on nationalization /
globalization. For example, in Oracle 8i, if a string column
is, say, VARCHAR2(60), 60 is the size of the column in
bytes. In Oracle 9i, the size will be characters. In Oracle
10g, it maybe specified in bytes or characters. The OTL
manual is not a substitute for the Oracle manuals. Starting with version 4.0.108, OTL supports Unicode strings for OTL/ODBC, and OTL/DB2-CLI. Unicode string data can be accessed in Oracle via ODBC, MS SQL via ODBC, and DB2 via DB2-CLI/ODBC. |
OTL_UNICODE_ CHAR_TYPE |
This #define is used in a
combination with OTL_UNICODE.
The #define specifies a compiler-specific, 2-byte Unicode
character type:

#define OTL_UNICODE
#define OTL_UNICODE_CHAR_TYPE wchar_t

When your C++ compiler doesn't have an appropriate Unicode compatible character type, unsigned short can be used instead:

#define OTL_UNICODE
#define OTL_UNICODE_CHAR_TYPE unsigned short |
OTL_UNICODE_ EXCEPTION_AND_ RLOGON |
This #define enables support
for Unicode otl_exception's
msg and sqlstate data
members, and an otl_connect::rlogon()
function that accepts a Unicode user id, password, and DSN.
This #define should be enabled only when #define
UNICODE / _UNICODE is enabled for ODBC / DB2 CLI, in other
words, when Unicode ODBC driver functions are enabled. Also,
it's recommended that this #define be used in a combination
with OTL_UNICODE and OTL_UNICODE_CHAR_TYPE,
for example:

#define OTL_UNICODE
#define OTL_UNICODE_CHAR_TYPE wchar_t
#define OTL_UNICODE_EXCEPTION_AND_RLOGON |
OTL_UNICODE_ STRING_TYPE |
This #define enables std::wstring as
the 2-byte Unicode string type in OTL. It can be used when wstring is based on
a wchar_t that
corresponds to 2-byte Unicode:

#define OTL_UNICODE
#define OTL_UNICODE_CHAR_TYPE wchar_t
#define OTL_UNICODE_STRING_TYPE wstring

If your C++ compiler doesn't have the std::wstring class defined (say, only std::string is defined), it is possible to instantiate std::basic_string<XXX>, where XXX is your 2-byte Unicode type; for example, when your 2-byte Unicode type is unsigned short:

#include <string>
namespace std{
  typedef unsigned short my_unicode_char;
  typedef basic_string<my_unicode_char> my_unicode_string;
}
#define OTL_UNICODE
#define OTL_UNICODE_CHAR_TYPE my_unicode_char
#define OTL_UNICODE_STRING_TYPE my_unicode_string

More specifically, GNU C++ doesn't implement std::wstring, so the example above should be useful for GNU C++ at least. |
OTL_UNICODE_ USE_ANSI_ ODBC_FUNCS_ FOR_DATA_DICT |
This #define should be used
as a workaround for an MS SQL Server ODBC driver bug. It's
not possible to say for sure what the scope of the bug (or
maybe it's an undocumented feature!) is. OTL provides
access to the data dictionary
ODBC / DB2 CLI functions. When Unicode ODBC function
prototypes are enabled via #define UNICODE / _UNICODE, the
corresponding Unicode ODBC data dictionary functions are
enabled. However, some of the Unicode ODBC data dictionary
functions don't work correctly when the MS SQL Server
Unicode ODBC driver is used. As a workaround, ANSI ODBC data
dictionary functions can be used instead, even if the output
string bind variables are Unicode. #define
OTL_UNICODE_USE_ANSI_ODBC_FUNCS_FOR_DATA_DICT makes OTL
generate ANSI ODBC data dictionary function calls instead of
the Unicode ODBC data dictionary function calls. |
OTL_VALUE _TEMPLATE_ON |
(for enabling otl_value<T>). The otl_value<T> template class can also be enabled with #define OTL_STL. #define OTL_VALUE_TEMPLATE_ON allows the template class to be enabled without turning on STL compliance. Not all C++ compilers compile OTL under #define OTL_STL; #define OTL_VALUE_TEMPLATE_ON was introduced in order to relax that limitation. For more detail, see examples 119, 120, 121. |
OTL_VERSION _NUMBER |
This #define holds the version number of the OTL header file, in which the #define is defined. For example, OTL 4.0.17 is defined as (0x040011L). This #define allows the user to keep track of OTL version numbers, e.g. the #define makes it possible to do more complex conditional compilation. |
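A minimal sketch of version-based conditional compilation (the guarded feature is hypothetical; 0x040011L corresponds to OTL 4.0.17, as noted above):

#include <otlv4.h>

#if defined(OTL_VERSION_NUMBER) && (OTL_VERSION_NUMBER >= 0x040011L)
// code that relies on behavior introduced in OTL 4.0.17 or later
#else
// fallback for older OTL versions
#endif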
OTL_UBIGINT |
Enables native support for ubigint
(unsigned 64-bit int). Not all database APIs have native
support for unsigned 64-bit ints. For example,
Oracle 11g Release 2 (#define OTL_ORA11G_R2)
and DB2 UDB provide such support. If unsigned 64-bit ints
are needed, but the underlying database API doesn't have any
native support for them, OTL extended
numeric types can be used. On LP64 platforms (64-bit Linux, Solaris, AIX, etc.), unsigned long (use #define OTL_STREAM_NO_PRIVATE_UNSIGNED_LONG_OPERATORS to disable the private operators >>/<< in the otl_stream class), or unsigned long long can be used. On LLP64 (64-bit Windows, etc.) or plain 32-bit platforms, unsigned long long can be used. For example:

#define OTL_UBIGINT unsigned long long

unsigned long long is guaranteed to have the same size (sizeof) on all of these platforms, so it's recommended for portability. |
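A minimal sketch of using the #define (the table name is hypothetical, and OTL_ORA11G_R2 is just one possible choice of database API #define):

#define OTL_ORA11G_R2
#define OTL_UBIGINT unsigned long long
#include <otlv4.h>

void write_id(otl_connect& db){
  otl_stream o(50, "insert into big_ids values(:id<ubigint>)", db);
  OTL_UBIGINT id=18446744073709551615ULL; // largest unsigned 64-bit value
  o<<id;
}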
OTL_UNCAUGHT_ EXCEPTION_OWN_ NAMESPACE |
When OTL is used with STL Port (#define OTL_STLPORT), STL Port library may
be configured not to expose some std:: functions like uncaught_exception()
in its namespace _STL. In order to work around the problem,
#define OTL_UNCAUGHT_EXCEPTION_OWN_NAMESPACE is
introduced:

#define OTL_UNCAUGHT_EXCEPTION_OWN_NAMESPACE __std_alias::
...
#include <otlv4.h>

This #define tells OTL what namespace to prefix the uncaught_exception() function with. In OTL 4.0.167 or higher, this #define is obsolete, and has no effect. |
In order to compile and link OTL with an underlying database API,
the following header files and libraries of the database API are
needed (<ORACLE_HOME>,
<DB2_HOME>,
<TimesTen_HOME>, and <INFORMIX_HOME> are home
directories for installations of Oracle, DB2, TimesTen, and
Informix):
API |
API header files for Windows |
API libraries for Windows |
OCI7 |
In <ORACLE_HOME>\oci\include |
<ORACLE_HOME>\oci\lib\<compiler_specific>\ociw32.lib |
OCI8 |
In <ORACLE_HOME>\oci\include | <ORACLE_HOME>\oci\lib\<compiler_specific>\oci.lib |
OCI8i |
In <ORACLE_HOME>\oci\include | <ORACLE_HOME>\oci\lib\<compiler_specific>\oci.lib |
OCI9i |
In <ORACLE_HOME>\oci\include | <ORACLE_HOME>\oci\lib\<compiler_specific>\oci.lib |
OCI10g |
In <ORACLE_HOME>\oci\include | <ORACLE_HOME>\oci\lib\<compiler_specific>\oci.lib |
ODBC |
Normally, in one of the C++
compiler system directories, no need to include explicitly. |
Normally, in one of the C++
compiler system directories: odbc32.lib |
DB2 CLI |
In <DB2_HOME>\include |
<DB2_HOME>\lib\db2api.lib <DB2_HOME>\lib\db2cli.lib |
TimesTen ODBC |
• Directly with the TimesTen ODBC driver: in <TimesTen_HOME>\include
• Directly with the TimesTen Client ODBC driver: in <TimesTen_HOME>\include
• With an ODBC driver manager (to be used with #define OTL_ODBC; no TimesTen ODBC extensions are available): normally, in one of the C++ compiler system directories, no need to include explicitly. |
• Directly with the TimesTen ODBC driver: <TimesTen_HOME>\lib\TTEN70.LIB, <TimesTen_HOME>\lib\TTDV70.LIB
• Directly with the TimesTen Client ODBC driver: <TimesTen_HOME>\lib\TTCL70.LIB
• With an ODBC driver manager: normally, in one of the C++ compiler system directories: odbc32.lib |
Informix CLI |
Default ODBC header files for
the C++ compiler. |
odbc32.lib |
Permission to use, copy, modify and redistribute this document for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.