Persistence Properties
JDO and JPA with DataNucleus are highly configurable using persistence properties. When
defining your PersistenceManagerFactory or EntityManagerFactory you have the opportunity
to control many aspects of the persistence process. DataNucleus is perhaps more configurable than
any other JDO/JPA implementation in this respect. This section defines the properties available for
use. Please bear in mind that these properties are only for use with DataNucleus and will not work
with other JDO/JPA implementations. All persistence property names are case-insensitive.
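As a minimal sketch of how these properties are supplied (the connection URL, credentials and datastore below are placeholders; substitute those appropriate for your own datastore), a PersistenceManagerFactory can be created programmatically like this:

    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManagerFactory;

    public class PMFBootstrap
    {
        public static void main(String[] args)
        {
            Properties props = new Properties();
            // Standard JDO property selecting the DataNucleus implementation
            props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
            // Placeholder connection settings; use the URL appropriate to your datastore
            props.setProperty("datanucleus.ConnectionURL", "jdbc:h2:mem:test");
            props.setProperty("datanucleus.ConnectionUserName", "sa");
            props.setProperty("datanucleus.ConnectionPassword", "");
            // Any persistence property from this section can be added in the same way
            props.setProperty("datanucleus.schema.autoCreateAll", "true");

            PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
            pmf.close();
        }
    }

The same property names can equally be placed in persistence.xml/jdoconfig.xml, or passed in a Map to Persistence.createEntityManagerFactory when using JPA. Later examples in this section use short fragments in this style and assume a factory is being configured in the same way.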
Datastore Definition
datanucleus.ConnectionURL
|
Description |
URL specifying the datastore to use for persistence.
Note that this will define the type of datastore as well as the datastore
itself. Please refer to the datastores guides
for the URL appropriate for the type of datastore you're using. |
Range of Values |
|
datanucleus.ConnectionUserName
|
Description |
Username to use for connecting to the DB |
Range of Values |
|
datanucleus.ConnectionPassword
|
Description |
Password to use for connecting to the DB.
See datanucleus.ConnectionPasswordDecrypter
for a way of providing an encrypted password here |
Range of Values |
|
datanucleus.ConnectionDriverName
|
Description |
The name of the (JDBC) driver to use for the DB (for RDBMS only). |
Range of Values |
|
datanucleus.ConnectionFactory
|
Description |
Instance of a connection factory for transactional connections.
This is an alternative to datanucleus.ConnectionURL.
For RDBMS, it must be an instance of javax.sql.DataSource.
See Data Sources. |
Range of Values |
|
datanucleus.ConnectionFactory2
|
Description |
Instance of a connection factory for nontransactional connections.
This is an alternative to datanucleus.ConnectionURL.
For RDBMS, it must be an instance of javax.sql.DataSource.
See Data Sources. |
Range of Values |
|
datanucleus.ConnectionFactoryName
|
Description |
The JNDI name for a connection factory for transactional connections.
For RDBMS, it must be a JNDI name that points to a javax.sql.DataSource object.
See Data Sources. |
Range of Values |
|
datanucleus.ConnectionFactory2Name
|
Description |
The JNDI name for a connection factory for nontransactional connections.
For RDBMS, it must be a JNDI name that points to a javax.sql.DataSource object.
See Data Sources. |
Range of Values |
|
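For example (a sketch only; the persistence-unit name and JNDI names are placeholders for DataSources configured in your container), the connection factories could be supplied as overrides when creating an EntityManagerFactory:

    Map<String, String> overrides = new HashMap<>();
    overrides.put("datanucleus.ConnectionFactoryName", "java:comp/env/jdbc/myTxDS");      // transactional connections
    overrides.put("datanucleus.ConnectionFactory2Name", "java:comp/env/jdbc/myNontxDS");  // nontransactional connections
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit", overrides);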
datanucleus.ConnectionPasswordDecrypter
|
Description |
Name of a class that implements org.datanucleus.store.connection.DecryptionProvider
and should only be specified if the password is encrypted in the persistence properties |
Range of Values |
|
General
datanucleus.IgnoreCache
|
Description |
Whether to ignore the cache for queries. If the user sets this to true then
the query will be evaluated in the datastore and the instances returned will be formed
from the datastore values; this means that if an instance has been modified in memory and its
datastore values match the query, then the instance returned will not be the currently
cached (updated) instance, but an instance formed using the datastore values. |
Range of Values |
true | false |
datanucleus.Multithreaded
|
Description |
Whether to run the PM/EM multithreaded.
Note that this is a hint only to try to allow thread-safe operations on the PM/EM |
Range of Values |
true | false |
datanucleus.NontransactionalRead
|
Description |
Whether to allow nontransactional reads |
Range of Values |
false | true |
datanucleus.NontransactionalWrite
|
Description |
Whether to allow nontransactional writes |
Range of Values |
false | true |
datanucleus.Optimistic
|
Description |
Whether to use optimistic transactions
(JDO,
JPA).
For JDO this defaults to false and for JPA it defaults to true |
Range of Values |
true | false |
datanucleus.RetainValues
|
Description |
Whether to suppress the clearing of values from persistent instances on transaction
completion. With JDO this defaults to false, whereas for JPA it is true |
Range of Values |
true | false |
datanucleus.RestoreValues
|
Description |
Whether persistent objects have transactional field values restored when transaction rollback
occurs. |
Range of Values |
true | false |
datanucleus.Mapping
|
Description |
Name for the ORM MetaData mapping files to use with this PMF. For example if this is
set to "mysql" then the implementation looks for MetaData mapping files called
"{classname}-mysql.orm" or "package-mysql.orm". If this is not specified then the JDO
implementation assumes that all is specified in the JDO MetaData file. |
Range of Values |
|
datanucleus.mapping.Catalog
|
Description |
Name of the catalog to use by default for all classes persisted using this PMF/EMF.
This can be overridden in the MetaData where required, and is optional.
DataNucleus will prefix all table names with this catalog name if the RDBMS supports specification
of catalog names in DDL.
RDBMS datastores only |
Range of Values |
|
datanucleus.mapping.Schema
|
Description |
Name of the schema to use by default for all classes persisted using this PMF/EMF.
This can be overridden in the MetaData where required, and is optional.
DataNucleus will prefix all table names with this schema name if the RDBMS supports specification
of schema names in DDL.
RDBMS datastores only |
Range of Values |
|
datanucleus.tenantId
|
Description |
String id to use as a discriminator on all persistable class tables to restrict data
for the tenant using this application instance
(aka multi-tenancy via discriminator).
RDBMS, MongoDB datastores only |
Range of Values |
|
datanucleus.DetachAllOnCommit |
Description |
Allows the user to select that when a transaction is committed all objects enlisted in that
transaction will be automatically detached. |
Range of Values |
true | false |
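A sketch of typical usage with JDO follows (the Product class is hypothetical, and the connection properties are assumed to be configured as in the introduction):

    Properties props = new Properties();
    props.setProperty("datanucleus.DetachAllOnCommit", "true");
    PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);

    PersistenceManager pm = pmf.getPersistenceManager();
    Transaction tx = pm.currentTransaction();
    tx.begin();
    Product product = pm.makePersistent(new Product("Book", 12.99));
    tx.commit();   // "product" is automatically detached at this point
    pm.close();
    // "product" can now be used, and later re-attached, outside the PersistenceManager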
datanucleus.detachAllOnRollback |
Description |
Allows the user to select that when a transaction is rolled back all objects
enlisted in that transaction will be automatically detached. |
Range of Values |
true | false |
datanucleus.CopyOnAttach |
Description |
Whether, when attaching a detached object, we create an attached copy or simply
migrate the detached object to attached state |
Range of Values |
true | false |
datanucleus.allowAttachOfTransient |
Description |
When you call EM.merge with a transient object (with PK fields set), if you enable this
feature then it will first check for existence of an object in the datastore with the
same identity and, if present, will merge into that object (rather than just trying
to persist a new object).
The default for JDO is false, and for JPA is true.
|
Range of Values |
true | false |
datanucleus.attachSameDatastore
|
Description |
When attaching an object DataNucleus by default assumes that you're attaching to the same
datastore as you detached from. DataNucleus does though allow you to attach to a different
datastore (for things like replication). Set this to false if you want to attach
to a different datastore than the one you detached from |
Range of Values |
true | false |
datanucleus.detachAsWrapped
|
Description |
When detaching, any mutable second class objects (Collections, Maps, Dates etc)
are typically detached as the basic form (so you can use them on client-side
of your application). This property allows you to select to detach as
wrapped objects. It only works with "detachAllOnCommit" situations (not with
detachCopy) currently |
Range of Values |
true | false |
datanucleus.DetachOnClose
|
Description |
This allows the user to specify whether, when a PM/EM
is closed, that all objects in the L1 cache are automatically detached.
Users are recommended to use the datanucleus.DetachAllOnCommit property
wherever possible. This will not work in JCA mode.
|
Range of Values |
false | true |
datanucleus.detachmentFields
|
Description |
When detaching you can control what happens to loaded/unloaded fields of
the FetchPlan. The default for JDO is to load any unloaded fields of the
current FetchPlan before detaching. You can also unload any loaded fields
that are not in the current FetchPlan (so you only get the fields you require)
as well as a combination of both options |
Range of Values |
load-fields | unload-fields | load-unload-fields |
datanucleus.maxFetchDepth
|
Description |
Specifies the default maximum fetch depth to use for fetching operations.
The JDO spec defines a default of 1, meaning that only the first level of related
objects will be fetched by default. The JPA spec doesn't provide fetch group control, just
a "default fetch group" type concept, consequently the default there is -1 currently. |
Range of Values |
-1 | 1 | positive integer (non-zero) |
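For instance (a sketch; the values are illustrative only), the detachment behaviour could be tuned like this when configuring the factory:

    props.setProperty("datanucleus.DetachAllOnCommit", "true");
    props.setProperty("datanucleus.detachmentFields", "load-fields");  // load unloaded FetchPlan fields before detaching
    props.setProperty("datanucleus.maxFetchDepth", "2");               // fetch/detach two levels of related objects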
datanucleus.detachedState
|
Description |
Allows control over which mechanism to use to determine the fields to be detached.
By default DataNucleus uses the defined "fetch-groups". Obviously JPA1/JPA2 don't have
that (although it is an option with DataNucleus), so we also allow loaded
which will detach just the currently loaded fields, and all which will
detach all fields of the object (be careful with this option since, when used with a
maxFetchDepth of -1, it will detach a whole object graph!) |
Range of Values |
fetch-groups | all | loaded |
datanucleus.TransactionType |
Description |
Type of transaction to use. If running under JavaSE the default is RESOURCE_LOCAL, and
if running under JavaEE the default is JTA. |
Range of Values |
RESOURCE_LOCAL | JTA |
datanucleus.ServerTimeZoneID |
Description |
Id of the TimeZone under which the datastore server is running. If this is not specified
or is set to null it is assumed that the datastore server is running in the same timezone
as the JVM under which DataNucleus is running. |
Range of Values |
|
datanucleus.PersistenceUnitName |
Description |
Name of a persistence-unit to be found in a persistence.xml
file (under META-INF) that defines the persistence properties to use
and the classes to use within the persistence process. |
Range of Values |
|
datanucleus.PersistenceUnitLoadClasses |
Description |
Used when we have specified the persistence-unit name for a PMF/EMF and where we
want the datastore "tables" for all classes of that persistence-unit loading up into the
StoreManager. Defaults to false since on some databases such an operation would
slow down the startup process. |
Range of Values |
true | false |
datanucleus.persistenceXmlFilename |
Description |
URL name of the persistence.xml file that should be used
instead of using "META-INF/persistence.xml". |
Range of Values |
|
datanucleus.datastoreReadTimeout
|
Description |
The timeout to apply to all reads (millisecs).
e.g. by query or by PM.getObjectById().
Only applies if the underlying datastore supports it |
Range of Values |
0 | A positive value (MILLISECONDS) |
datanucleus.datastoreWriteTimeout
|
Description |
The timeout to apply to all writes (millisecs).
e.g. by makePersistent, or by an update.
Only applies if the underlying datastore supports it |
Range of Values |
0 | A positive value (MILLISECONDS) |
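For example (illustrative values; only honoured where the underlying datastore supports timeouts):

    props.setProperty("datanucleus.datastoreReadTimeout", "5000");    // 5 seconds for reads (queries, getObjectById/find)
    props.setProperty("datanucleus.datastoreWriteTimeout", "10000");  // 10 seconds for writes (persists, updates)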
datanucleus.singletonPMFForName
|
Description |
Whether to only allow a singleton PMF for a particular name (the name can be either
the name of the PMF in jdoconfig.xml, or the name of the persistence-unit).
If a subsequent request is made for a PMF with a name that already exists then a
warning will be logged and the original PMF returned. |
Range of Values |
true | false |
datanucleus.singletonEMFForName
|
Description |
Whether to only allow a singleton EMF for persistence-unit.
If a subsequent request is made for an EMF with a name that already exists then a
warning will be logged and the original EMF returned. |
Range of Values |
true | false |
datanucleus.allowListenerUpdateAfterInit
|
Description |
Whether you want to be able to add/remove listeners on the JDO PMF after it is marked as
not configurable (when the first PM is created). The default matches the JDO spec, not allowing
changes to the listeners in use. |
Range of Values |
true | false |
datanucleus.storeManagerType
|
Description |
Type of the StoreManager to use for this PMF/EMF. This has typical values of "rdbms", "mongodb".
If it isn't specified then it falls back to trying to find the StoreManager from the
connection URL. The associated DataNucleus plugin has to be in the CLASSPATH when selecting this.
When using data sources (as usually done in a JavaEE container), DataNucleus cannot find out the
correct type automatically and this option must be set. |
Range of Values |
rdbms | mongodb | alternate StoreManager key |
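As an example (a sketch; the persistence-unit name and JNDI name are placeholders), when connecting via a JNDI DataSource there is no connection URL to inspect, so the store type should be stated explicitly:

    Map<String, String> overrides = new HashMap<>();
    overrides.put("datanucleus.ConnectionFactoryName", "java:comp/env/jdbc/myDS");  // DataSource, so no URL to inspect
    overrides.put("datanucleus.storeManagerType", "rdbms");                         // select the store plugin explicitly
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit", overrides);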
datanucleus.jmxType
|
Description |
Which JMX server to use when hooking into JMX.
Please refer to the Monitoring Guide |
Range of Values |
default | mx4j |
datanucleus.deletionPolicy
|
Description |
Allows the user to decide the policy when deleting objects. The default is "JDO2" which firstly
checks if the field is dependent and if so deletes dependents, and then for others will null any
foreign keys out. The problem with this option is that it takes no account of whether the user has also
defined <foreign-key> elements, so we provide a "DataNucleus" mode that does the dependent field part first
and then if a FK element is defined will leave it to the FK in the datastore to perform any actions, and
otherwise does the nulling. |
Range of Values |
JDO2 | DataNucleus |
datanucleus.identityStringTranslatorType
|
Description |
You can allow identities input to pm.getObjectById(id) to be translated into
valid JDO ids if there is a suitable translator.
See Identity String Translator Plugin
|
Range of Values |
|
datanucleus.identityKeyTranslatorType
|
Description |
You can allow identities input to pm.getObjectById(cls, key) to be translated into
valid JDO ids if there is a suitable key translator.
See Identity Key Translator Plugin
|
Range of Values |
|
datanucleus.datastoreIdentityType
|
Description |
Which "datastore-identity" class plugin to use to represent datastore identities.
Refer to Datastore Identity extensions for details. |
Range of Values |
datanucleus | kodo | xcalia | {user-supplied plugin} |
datanucleus.executionContext.maxIdle
|
Description |
Specifies the maximum number of ExecutionContext objects that are pooled ready for use |
Range of Values |
20 | integer value greater than 0 |
datanucleus.executionContext.reaperThread
|
Description |
Whether to start a reaper thread that continually monitors the pool of ExecutionContext
objects and frees them off after they have surpassed their expiration period |
Range of Values |
false | true |
datanucleus.objectProvider.className
|
Description |
Class name for the ObjectProvider to use when managing object state.
The default for RDBMS is ReferentialStateManagerImpl, and is StateManagerImpl for all other datastores. |
Range of Values |
{user-provided class-name} |
datanucleus.useImplementationCreator
|
Description |
Whether to allow use of the implementation-creator (feature of JDO to dynamically
create implementations of persistent interfaces).
Defaults to true for JDO, and false for JPA |
Range of Values |
true | false |
datanucleus.manageRelationships
|
Description |
This allows the user control over whether DataNucleus will try to manage bidirectional
relations, correcting the input objects so that all relations are consistent.
This process runs when flush()/commit() is called.
JDO defaults to true and JPA defaults to
false.
You can set it to false if you
always set both sides of a relation when persisting/updating. |
Range of Values |
true | false |
datanucleus.manageRelationshipsChecks
|
Description |
This allows the user control over whether DataNucleus will make consistency checks on
bidirectional relations. If "datanucleus.manageRelationships" is not selected then
no checks are performed. If a consistency check fails at flush()/commit() then
a JDOUserException is thrown.
You can set it to false if you want to omit all consistency checks. |
Range of Values |
true | false |
datanucleus.persistenceByReachabilityAtCommit
|
Description |
Whether to run the "persistence-by-reachability" algorithm at commit() time.
This means that objects that were reachable at a call to makePersistent()
but that are no longer persistent will be removed from persistence.
For performance improvements, consider turning this off. |
Range of Values |
true | false |
datanucleus.classLoaderResolverName
|
Description |
Name of a ClassLoaderResolver to use in class loading. DataNucleus provides a default that
loosely follows the JDO specification for class loading. This property allows the user to
override this with their own class better suited to their own loading requirements. |
Range of Values |
datanucleus | {name of class-loader-resolver plugin} |
datanucleus.primaryClassLoader
|
Description |
Sets a primary classloader for situations where a primary classloader is not accessible. This ClassLoader
is used when the class is not found in the default ClassLoader search path; for example, when the database
driver is loaded by a ClassLoader that is not in the ClassLoader search path defined by the JDO or JPA specifications. |
Range of Values |
instance of java.lang.ClassLoader |
datanucleus.plugin.pluginRegistryClassName
|
Description |
Name of a class that acts as registry for plug-ins.
This defaults to org.datanucleus.plugin.NonManagedPluginRegistry (for when not using OSGi).
If you are within an OSGi environment you can set this to org.datanucleus.plugin.OSGiPluginRegistry |
Range of Values |
{fully-qualified class name} |
datanucleus.plugin.pluginRegistryBundleCheck
|
Description |
Defines what happens when plugin bundles are found and are duplicated |
Range of Values |
EXCEPTION | LOG | NONE |
datanucleus.plugin.allowUserBundles
|
Description |
Defines whether user-provided bundles providing DataNucleus extensions will be registered.
This is only respected if used in a non-Eclipse OSGi environment. |
Range of Values |
true | false |
datanucleus.plugin.validatePlugins
|
Description |
Defines whether a validation step should be performed checking for plugin dependencies etc.
This is only respected if used in a non-Eclipse OSGi environment. |
Range of Values |
false | true |
datanucleus.findObject.validateWhenCached |
Description |
When a user calls getObjectById (JDO) or findObject (JPA) and requests validation,
this allows validation to be turned off when an object is found in the (L2) cache.
Can be useful for performance reasons, but should be used with care.
Defaults to true for JDO (to be consistent with the JDO spec), and
to false for JPA. |
Range of Values |
true | false |
datanucleus.findObject.typeConversion |
Description |
When calling PM.getObjectById(Class, Object) or EM.find(Class, Object) the second argument really ought to be
the exact type of the primary-key field. This property enables conversion of basic numeric types (Long, Integer, Short)
to the appropriate numeric type (if the PK is a numeric type). Set this to false if you want strict JPA compliance. |
Range of Values |
true | false |
Schema Control
datanucleus.schema.autoCreateAll
|
Description |
Whether to automatically generate any schema, tables, columns, constraints that don't exist.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
datanucleus.schema.autoCreateSchema
|
Description |
Whether to automatically generate any schema that doesn't exist. This depends very much on whether the
datastore in question supports this operation.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
datanucleus.schema.autoCreateTables
|
Description |
Whether to automatically generate any tables that don't exist.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
datanucleus.schema.autoCreateColumns
|
Description |
Whether to automatically generate any columns that don't exist.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
datanucleus.schema.autoCreateConstraints
|
Description |
Whether to automatically generate any constraints that don't exist.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
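A sketch of a typical development-time configuration (illustrative only; see the Schema Guide for full details):

    // Create everything that is missing at startup ...
    props.setProperty("datanucleus.schema.autoCreateAll", "true");
    // ... or be more selective about what is created automatically
    props.setProperty("datanucleus.schema.autoCreateTables", "true");
    props.setProperty("datanucleus.schema.autoCreateColumns", "true");
    props.setProperty("datanucleus.schema.autoCreateConstraints", "false");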
datanucleus.autoCreateWarnOnError
|
Description |
Whether to only log a warning when errors occur during the auto-creation/validation process.
Please use with care since, if the schema is incorrect, errors will likely come up later; this merely
postpones those error checks until later, when it may be too late! |
Range of Values |
true | false |
datanucleus.schema.validateAll
|
Description |
Alias for defining datanucleus.schema.validateTables, datanucleus.schema.validateColumns
and datanucleus.schema.validateConstraints as all true.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
datanucleus.schema.validateTables
|
Description |
Whether to validate tables against the persistence definition.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
datanucleus.schema.validateColumns
|
Description |
Whether to validate columns against the persistence definition. This refers to the column
detail structure and NOT to whether the column exists or not.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
datanucleus.schema.validateConstraints
|
Description |
Whether to validate table constraints against the persistence definition.
Please refer to the Schema Guide for more details. |
Range of Values |
true | false |
datanucleus.readOnlyDatastore
|
Description |
Whether the datastore is read-only or not (fixed in structure and contents). |
Range of Values |
true | false |
datanucleus.readOnlyDatastoreAction
|
Description |
What happens when a datastore is read-only and an object is attempted to
be persisted. |
Range of Values |
EXCEPTION | IGNORE |
datanucleus.generateSchema.database.mode
|
Description |
Whether to perform any schema generation to the database at startup.
Will process the schema for all classes that have metadata loaded at startup (i.e. the
classes specified in a persistence-unit). |
Range of Values |
create | drop | drop-and-create | none |
datanucleus.generateSchema.scripts.mode
|
Description |
Whether to perform any schema generation into scripts at startup.
Will process the schema for all classes that have metadata loaded at startup (i.e. the
classes specified in a persistence-unit). |
Range of Values |
create | drop | drop-and-create | none |
datanucleus.generateSchema.scripts.create.target
|
Description |
Name of the script file to write to if doing a "create" with the target as "scripts" |
Range of Values |
datanucleus-schema-create.ddl | {filename} |
datanucleus.generateSchema.scripts.drop.target
|
Description |
Name of the script file to write to if doing a "drop" with the target as "scripts" |
Range of Values |
datanucleus-schema-drop.ddl | {filename} |
datanucleus.generateSchema.scripts.create.source
|
Description |
Name of a script file to run to create tables. Can be absolute filename, or URL string |
Range of Values |
{filename} |
datanucleus.generateSchema.scripts.drop.source
|
Description |
Name of a script file to run to drop tables. Can be absolute filename, or URL string |
Range of Values |
{filename} |
datanucleus.generateSchema.scripts.load
|
Description |
Name of a script file to run to load data into the schema. Can be absolute filename, or URL string |
Range of Values |
{filename} |
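For example (a sketch; the persistence-unit name and script filename are placeholders):

    Map<String, String> props = new HashMap<>();
    props.put("datanucleus.generateSchema.database.mode", "drop-and-create");            // act on the database at startup
    props.put("datanucleus.generateSchema.scripts.mode", "create");                      // also write the DDL to a script
    props.put("datanucleus.generateSchema.scripts.create.target", "create-schema.ddl");  // placeholder filename
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit", props);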
datanucleus.identifierFactory
|
Description |
Name of the identifier factory to use when generating table/column names etc (RDBMS datastores only).
See also the JDO RDBMS Identifier Guide. |
Range of Values |
datanucleus1 | datanucleus2 | jpox | jpa | {user-plugin-name} |
datanucleus.identifier.namingFactory
|
Description |
Name of the identifier NamingFactory to use when generating table/column names etc (non-RDBMS datastores).
Defaults to "datanucleus2" for JDO and "jpa" for JPA usage. |
Range of Values |
datanucleus2 | jpa | {user-plugin-name} |
datanucleus.identifier.case
|
Description |
Which case to use in generated table/column identifier names.
See also the Datastore Identifier Guide.
RDBMS defaults to UPPERCASE. Cassandra defaults to lowercase |
Range of Values |
UPPERCASE | LowerCase | MixedCase |
datanucleus.identifier.wordSeparator
|
Description |
Separator character(s) to use between words in generated identifiers. Defaults to "_" (underscore) |
datanucleus.identifier.tablePrefix
|
Description |
Prefix to be prepended to all generated table names (if the identifier factory supports it) |
datanucleus.identifier.tableSuffix
|
Description |
Suffix to be appended to all generated table names (if the identifier factory supports it) |
datanucleus.defaultInheritanceStrategy
|
Description |
How to choose the inheritance strategy default for classes where no strategy has been
specified. With JDO2 this will be "new-table" for base classes and
"superclass-table" for subclasses.
With TABLE_PER_CLASS this will be "new-table" for all classes. |
Range of Values |
JDO2 | TABLE_PER_CLASS |
datanucleus.store.allowReferencesWithNoImplementations
|
Description |
Whether we permit a reference field (1-1 relation) or collection of references
where there are no defined implementations of the reference. False means that an
exception will be thrown during schema generation |
Range of Values |
true | false |
Transactions and Locking
datanucleus.transactionIsolation
|
Description |
Select the default transaction isolation level for ALL PM/EM
factories. Some databases do not support all isolation levels, refer to your
database documentation. Please refer to the transaction guides for
JDO and
JPA |
Range of Values |
read-uncommitted | read-committed | repeatable-read | serializable |
datanucleus.SerializeRead
|
Description |
With datastore transactions you can apply locking to objects as they are
read from the datastore. This setting applies as the default for all
PM/EMs obtained. You can also specify this
on a per-transaction or per-query basis (which is often better to avoid
deadlocks etc) |
Range of Values |
true | false |
datanucleus.jtaLocator
|
Description |
Selects the locator to use when using JTA transactions so that DataNucleus can find the JTA TransactionManager.
If this isn't specified and using JTA transactions DataNucleus will search all available locators which could
have a performance impact.
See JTA Locator extension.
If specifying "custom_jndi" please also specify "datanucleus.jtaJndiLocation" |
Range of Values |
jboss | jonas | jotm | oc4j | orion | resin | sap | sun | weblogic | websphere |
custom_jndi | alias of a JTA transaction locator |
datanucleus.jtaJndiLocation
|
Description |
Name of a JNDI location to find the JTA transaction manager from (when using
JTA transactions). This is for the case where you know where it is located. If not
used DataNucleus will try certain well-known locations |
Range of Values |
JNDI location |
datanucleus.datastoreTransactionFlushLimit
|
Description |
For use with datastore transactions; this is the limit on the number of dirty
objects before a flush to the datastore will be performed. |
Range of values |
1 | positive integer |
datanucleus.flush.mode
|
Description |
Sets when persistence operations are flushed to the datastore.
MANUAL means that operations will be sent only on flush()/commit().
AUTO means that operations will be sent immediately. |
Range of values |
MANUAL | AUTO |
datanucleus.flush.optimised
|
Description |
Whether to use an "optimised" flush process, changing the order of persists for
referential integrity (as used by RDBMS typically), or whether to just build a
list of deletes, inserts and updates and do them in batches. RDBMS defaults to true, whereas
other datastores default to false (due to not having referential integrity, so gaining from
batching). |
Range of values |
true | false |
datanucleus.nontx.atomic
|
Description |
When a user invokes a nontransactional operation they can choose for these changes to go
straight to the datastore (atomically) or to wait until either the next transaction commit,
or close of the PM/EM. Disable this if you want operations to be processed with the next
real transaction. This defaults to true for JDO, and false for JPA |
Range of Values |
true | false |
datanucleus.connectionPoolingType
|
Description |
This property allows you to utilise a 3rd party software package for enabling connection pooling.
When using RDBMS you can select from DBCP, C3P0, Proxool, BoneCP, or dbcp-builtin.
You must have the 3rd party jars in the CLASSPATH to use these options. Please refer to the
Connection Pooling guide for details.
|
Range of Values |
None | DBCP | DBCP2 | C3P0 | Proxool | BoneCP | HikariCP | dbcp-builtin | {others} |
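For instance (a sketch; the chosen pool requires its jars on the CLASSPATH):

    props.setProperty("datanucleus.connectionPoolingType", "DBCP2");      // pool transactional connections with DBCP2
    props.setProperty("datanucleus.connectionPoolingType.nontx", "None"); // no pooling for nontransactional connections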
datanucleus.connectionPoolingType.nontx
|
Description |
This property allows you to utilise a 3rd party software package for enabling connection
pooling for nontransactional connections using a DataNucleus plugin.
If you don't specify this value but do define the above value then that is taken by default.
Refer to the above property for more details.
|
Range of Values |
None | DBCP | DBCP2 | C3P0 | Proxool | BoneCP | HikariCP | "dbcp-builtin" | {others} |
datanucleus.connection.nontx.releaseAfterUse
|
Description |
Applies only to non-transactional connections and refers to whether to re-use (pool)
the connection internally for later use. The default behaviour is to close any such
non-transactional connection after use. If doing significant non-transactional processing
in your application then this may provide performance benefits, but be careful about the
number of connections being held open (if one is held open per PM/EM).
|
Range of Values |
true | false |
datanucleus.connection.singleConnectionPerExecutionContext
|
Description |
With an ExecutionContext (PM/EM) we normally allocate one connection for a transaction and close it after the transaction, then a different
connection for nontransactional ops. This flag acts as a hint to the store plugin to obtain and retain a single connection throughout
the lifetime of the PM/EM.
|
Range of Values |
true | false |
datanucleus.connection.resourceType
|
Description |
Resource type for the primary (transactional) connection factory. |
Range of Values |
JTA | RESOURCE_LOCAL |
datanucleus.connection.resourceType2
|
Description |
Resource type for the secondary (nontransactional) connection factory. |
Range of Values |
JTA | RESOURCE_LOCAL |
Caching
datanucleus.cache.collections
|
Description |
SCO collections can be used in 2 modes in DataNucleus. You can allow DataNucleus to cache the collections contents, or
you can tell DataNucleus to access the datastore for every access of the SCO collection. The default is to use
the cached collection. |
Range of Values |
true | false |
datanucleus.cache.collections.lazy
|
Description |
When using cached collections/maps, the elements/keys/values can be loaded when the object is
initialised, or can be loaded when accessed (lazy loading). The default is to use lazy loading
when the field is not in the current fetch group, and to not use lazy loading when the field
is in the current fetch group. |
Range of Values |
true | false |
datanucleus.cache.level1.type
|
Description |
Name of the type of Level 1 cache to use. Defines the backing map.
See also Cache docs for JDO, and
for JPA |
Range of Values |
soft | weak | strong | {your-plugin-name} |
datanucleus.cache.level2.type
|
Description |
Name of the type of Level 2 Cache to use. Can be used to interface with external
caching products. Use "none" to turn off L2 caching.
See also Cache docs for JDO, and
for JPA |
Range of Values |
none | soft | weak | coherence | ehcache | ehcacheclassbased | cacheonix |
oscache | swarmcache | javax.cache | spymemcached | xmemcached | {your-plugin-name} |
datanucleus.cache.level2.mode
|
Description |
The mode of operation of the L2 cache, deciding which entities are cached.
The default (UNSPECIFIED) is the same as DISABLE_SELECTIVE.
See also Cache docs for JDO, and
for JPA |
Range of Values |
NONE | ALL | ENABLE_SELECTIVE | DISABLE_SELECTIVE | UNSPECIFIED |
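A sketch of a possible L2 cache configuration (illustrative values; "ehcache" requires the EHCache plugin and jars on the CLASSPATH, and the cache name is a placeholder):

    props.setProperty("datanucleus.cache.level2.type", "ehcache");
    props.setProperty("datanucleus.cache.level2.mode", "ENABLE_SELECTIVE");  // only cache classes explicitly marked as cacheable
    props.setProperty("datanucleus.cache.level2.cacheName", "myAppCache");   // placeholder cache name
    props.setProperty("datanucleus.cache.level2.maxSize", "5000");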
datanucleus.cache.level2.storeMode
|
Description |
Whether to use the L2 cache for storing values (set to "bypass" to not store within the
context of the operation) |
Range of Values |
use | bypass |
datanucleus.cache.level2.retrieveMode
|
Description |
Whether to use the L2 cache for retrieving values (set to "bypass" to not retrieve from L2
cache within the context of the operation, i.e. go to the datastore) |
Range of Values |
use | bypass |
datanucleus.cache.level2.updateMode
|
Description |
When the objects in the L2 cache should be updated. Defaults to updating at commit AND
when fields are read from a datastore object |
Range of Values |
commit-and-datastore-read | commit |
datanucleus.cache.level2.cacheName
|
Description |
Name of the cache. This is for use with plugins such as the Tangosol cache plugin
for accessing the particular cache. Please refer to the Cache Guide for
JDO or JPA |
Range of Values |
your cache name |
datanucleus.cache.level2.maxSize
|
Description |
Max size for the L2 cache (supported by weak, soft, coherence, ehcache,
ehcacheclassbased, javax.cache) |
Range of Values |
-1 | integer value |
datanucleus.cache.level2.clearAtClose
|
Description |
Whether the close of the L2 cache (when the PMF/EMF closes) should also clear out
any objects from the underlying cache mechanism. By default it will clear objects out
but if the user has configured an external cache product and wants to share objects
across multiple PMF/EMFs then this can be set to false. |
Range of Values |
true | false |
datanucleus.cache.level2.batchSize
|
Description |
When objects are added to the L2 cache at commit they are typically batched. This property
sets the max size of the batch. |
Range of Values |
100 | integer value |
datanucleus.cache.level2.timeout
|
Description |
Some caches (Cacheonix, javax.cache) allow specification of an expiration time for objects
in the cache. This property is the timeout in milliseconds (if unset, the cache
default is used). |
Range of Values |
-1 | integer value |
datanucleus.cache.level2.readThrough
|
Description |
With javax.cache L2 caches you can configure the cache to allow read-through |
Range of Values |
true | false |
datanucleus.cache.level2.writeThrough
|
Description |
With javax.cache L2 caches you can configure the cache to allow write-through |
Range of Values |
true | false |
datanucleus.cache.level2.storeByValue
|
Description |
With javax.cache L2 caches you can configure the cache to store by value
(as opposed to by reference) |
Range of Values |
true | false |
datanucleus.cache.level2.statisticsEnabled
|
Description |
With javax.cache L2 caches you can configure the cache to enable statistics gathering
(accessible via JMX) |
Range of Values |
false | true |
datanucleus.cache.queryCompilation.type
|
Description |
Type of cache to use for caching of generic query compilations |
Range of Values |
none | soft | weak | strong | {your-plugin-name} |
datanucleus.cache.queryCompilationDatastore.type
|
Description |
Type of cache to use for caching of datastore query compilations |
Range of Values |
none | soft | weak | strong | {your-plugin-name} |
datanucleus.cache.queryResults.type
|
Description |
Type of cache to use for caching query results. |
Range of Values |
none | soft | weak | strong | javax.cache | spymemcached | xmemcached | cacheonix |
{your-plugin-name} |
datanucleus.cache.queryResults.cacheName
|
Description |
Name of cache for caching the query results. |
Range of Values |
datanucleus-query | {your-name} |
datanucleus.cache.queryResults.maxSize
|
Description |
Max size for the query results cache (supported by weak, soft, strong) |
Range of Values |
-1 | integer value |
Validation
datanucleus.validation.mode
|
Description |
Determines whether the automatic lifecycle event validation is in effect.
Defaults to auto for JPA and none for JDO |
Range of Values |
auto | callback | none |
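For example (a sketch; the persistence-unit name is a placeholder and a Bean Validation provider is assumed to be on the CLASSPATH):

    Map<String, String> props = new HashMap<>();
    props.put("datanucleus.validation.mode", "auto");
    props.put("datanucleus.validation.group.pre-persist", "javax.validation.groups.Default");
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit", props);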
datanucleus.validation.group.pre-persist
|
Description |
The classes to validate on the pre-persist callback |
Range of Values |
|
datanucleus.validation.group.pre-update
|
Description |
The classes to validate on the pre-update callback |
Range of Values |
|
datanucleus.validation.group.pre-remove
|
Description |
The classes to validate on the pre-remove callback |
Range of Values |
|
datanucleus.validation.factory
|
Description |
The validation factory to use in validation |
Range of Values |
|
Value Generation
datanucleus.valuegeneration.transactionAttribute
|
Description |
Whether to use the PM connection or open a new connection.
Only used by value generators that require a connection to the datastore. |
Range of Values |
New | UsePM |
datanucleus.valuegeneration.transactionIsolation
|
Description |
Select the default transaction isolation level for identity generation.
Must have datanucleus.valuegeneration.transactionAttribute set to New
Some databases do not support all isolation levels, refer to your
database documentation. Please refer to the transaction guides for
JDO and
JPA |
Range of Values |
read-uncommitted | read-committed | repeatable-read | serializable |
datanucleus.valuegeneration.sequence.allocationSize
|
Description |
If still using JDO 3.0 and not specifying the size of your sequence, this acts
as the default allocation size. |
Range of Values |
10 | (integer value) |
datanucleus.valuegeneration.increment.allocationSize
|
Description |
Sets the default allocation size for any "increment" value strategy.
You can configure each member strategy individually but they fall back to this value
if not set |
Range of Values |
10 | (integer value) |
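For instance (illustrative values), larger blocks of values can be allocated per trip to the datastore, using a separate connection for the value generation:

    props.setProperty("datanucleus.valuegeneration.increment.allocationSize", "50");  // allocate 50 values at a time
    props.setProperty("datanucleus.valuegeneration.transactionAttribute", "New");     // use a new connection for value generation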
MetaData
datanucleus.metadata.jdoFileExtension
|
Description |
Suffix for JDO MetaData files. Provides the ability to override the default suffix and also
to have one PMF with one suffix and another with a different suffix, hence allowing
differing persistence of the same classes using different PMFs. |
Range of values |
jdo | {file suffix} |
datanucleus.metadata.ormFileExtension
|
Description |
Suffix for ORM MetaData files. Provides the ability to override the default suffix and also
to have one PMF with one suffix and another with a different suffix, hence allowing
differing persistence of the same classes using different PMFs. |
Range of values |
orm | {file suffix} |
datanucleus.metadata.jdoqueryFileExtension
|
Description |
Suffix for JDO Query MetaData files. Provides the ability to override the default suffix and also
to have one PMF with one suffix and another with a different suffix, hence allowing
differing persistence of the same classes using different PMFs. |
Range of values |
jdoquery | {file suffix} |
datanucleus.metadata.alwaysDetachable
|
Description |
Whether to treat all classes as detachable irrespective of input metadata.
See also "alwaysDetachable" enhancer option. |
Range of values |
false | true |
datanucleus.metadata.ignoreMetaDataForMissingClasses
|
Description |
Whether to ignore metadata for classes that aren't found. The default (false) is to throw an exception. |
Range of values |
false | true |
datanucleus.metadata.xml.validate
|
Description |
Whether to validate the MetaData file(s) for XML correctness (against the DTD) when parsing. |
Range of values |
true | false |
datanucleus.metadata.xml.namespaceAware
|
Description |
Whether to allow for XML namespaces in metadata files. The vast majority of sane people
should not need this at all, but it's enabled by default to allow for those that do (since v3.2.3) |
Range of values |
true | false |
datanucleus.metadata.allowXML
|
Description |
Whether to allow XML metadata. Turn this off if not using any, for performance.
From v3.0.4 onwards |
Range of values |
true | false |
datanucleus.metadata.allowAnnotations
|
Description |
Whether to allow annotations metadata. Turn this off if not using any, for performance.
From v3.0.4 onwards |
Range of values |
true | false |
datanucleus.metadata.allowLoadAtRuntime
|
Description |
Whether to allow load of metadata at runtime. This is intended for the situation
where you are handling persistence of a persistence-unit and only want the
classes explicitly specified in the persistence-unit. |
Range of values |
true | false |
datanucleus.metadata.autoregistration
|
Description |
Whether to use the JDO auto-registration of metadata. Turned on by default |
Range of values |
true | false |
datanucleus.metadata.supportORM
|
Description |
Whether to support "orm" mapping files. By default we use what the datastore plugin
supports. This can be used to turn it off when the datastore supports it but we don't
plan on using it (for performance) |
Range of values |
true | false |
Auto-Start
datanucleus.autoStartMechanism
|
Description |
How to initialise DataNucleus at startup. This allows DataNucleus to read in from
some source the classes that it was persisting for this data store the previous time.
XML stores the information in an XML file for this purpose.
SchemaTable (only for RDBMS) stores a table in the RDBMS for this purpose.
Classes looks at the property datanucleus.autoStartClassNames for a list of classes.
MetaData looks at the property datanucleus.autoStartMetaDataFiles for a list of metadata files.
The other option (default) is None (start from scratch each time).
Please refer to the Auto-Start Mechanism Guide for more details.
Alternatively just use persistence.xml to specify the classes and/or mapping files to load at startup.
Note also that "Auto-Start" is for RUNTIME use only (not during SchemaTool).
|
Range of Values |
None | XML | Classes | MetaData | SchemaTable |
datanucleus.autoStartMechanismMode
|
Description |
The mode of operation of the auto start mode. Currently there are 3 values. "Quiet" means that at startup if any errors are
encountered, they are fixed quietly. "Ignored" means that at startup if any errors are encountered they are just ignored.
"Checked" means that at startup if any errors are encountered they are thrown as exceptions. |
Range of values |
Checked | Ignored | Quiet |
datanucleus.autoStartMechanismXmlFile
|
Description |
Filename used for the XML file for AutoStart when using "XML" Auto-Start Mechanism |
datanucleus.autoStartClassNames
|
Description |
This property specifies a list of classes (comma-separated) that are loaded at
startup when using the "Classes" Auto-Start Mechanism. |
datanucleus.autoStartMetaDataFiles
|
Description |
This property specifies a list of metadata files (comma-separated) that are
loaded at startup when using the "MetaData" Auto-Start Mechanism. |
Query control
datanucleus.query.flushBeforeExecution
|
Description |
This property can enforce a flush to the datastore of any outstanding changes just
before executing all queries. If using optimistic transactions any updates are typically
held back until flush/commit and so the query would otherwise not take them into account. |
Range of Values |
true | false |
datanucleus.query.useFetchPlan
|
Description |
Whether to use the FetchPlan when executing a JDOQL query. The default is to use it which means that
the relevant fields of the object will be retrieved. This allows the option of just retrieving the
identity columns. |
Range of Values |
true | false |
datanucleus.query.compileOptimised
|
Description |
The generic query compilation process has a simple "optimiser" to try to iron out potential
problems in users queries. It isn't very advanced yet, but currently will detect and try to fix
a query clause like "var == this" (which is pointless). This will be extended in the future to
handle other common situations. |
Range of Values |
true | false |
datanucleus.query.jdoql.allowAll
|
Description |
JDO only allows javax.jdo.query.JDOQL queries to run SELECT statements.
This extension permits you to bypass this limitation so that the DataNucleus
bulk "update" and bulk "delete" extensions can be run. |
Range of Values |
false | true |
datanucleus.query.sql.allowAll
|
Description |
JDO2 only allows javax.jdo.query.SQL queries to run SELECT statements.
This extension permits you to bypass this limitation (so that, for example, you can execute stored procedures). |
Range of Values |
false | true |
datanucleus.query.checkUnusedParameters
|
Description |
Whether to check for unused input parameters and throw an exception if found.
The JDO and JPA specs require this check, and it is a good way of catching a misnamed
parameter in the query, for example. |
Range of Values |
true | false |
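A sketch of running a bulk delete once the JDOQL restriction is lifted (the mydomain.Product class and filter are hypothetical; the result of execute() for a bulk operation is typically the number of records affected):

    props.setProperty("datanucleus.query.jdoql.allowAll", "true");
    PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);

    PersistenceManager pm = pmf.getPersistenceManager();
    Transaction tx = pm.currentTransaction();
    tx.begin();
    Query query = pm.newQuery("DELETE FROM mydomain.Product WHERE price < 1.0");  // single-string JDOQL bulk delete
    Long numberDeleted = (Long) query.execute();
    tx.commit();
    pm.close();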
Datastore Specific
Properties below here are for particular datastores only.
datanucleus.rdbms.datastoreAdapterClassName
|
Description |
This property allows you to supply the class name of the adapter to use for your
datastore.
The default is not to specify this property and DataNucleus will autodetect the
datastore type and use its own internal datastore adapter classes. This allows you
to override the default behaviour where there may be an issue with the default
adapter class.
Applicable for RDBMS only |
Range of Values |
(valid class name on the CLASSPATH) |
datanucleus.rdbms.useLegacyNativeValueStrategy
|
Description |
This property changes the process for deciding the value strategy to use when the user has
selected "native"(JDO)/"auto"(JPA) to be like it was with version 3.0 and earlier, so using
"increment" and "uuid-hex".
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.statementBatchLimit
|
Description |
Maximum number of statements that can be batched. The default is 50, and the limit also applies
to the deletion of objects.
Please refer to the Statement Batching guide
Applicable for RDBMS only |
Range of Values |
integer value (0 = no batching) |
datanucleus.rdbms.checkExistTablesOrViews
|
Description |
Whether to check if the table/view exists. If false, it disables the automatic generation
of tables that don't exist.
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.useDefaultSqlType
|
Description |
This property applies for schema generation in terms of setting the default column "sql-type" (when you haven't defined it) and where
the JDBC driver has multiple possible "sql-type" for a "jdbc-type".
If the property is set to false, it will take the first provided "sql-type" from the JDBC driver.
If the property is set to true, it will take the "sql-type" that matches what the DataNucleus "plugin.xml" implies.
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.initializeColumnInfo
|
Description |
Allows control over what column information is initialised when a table is loaded for the
first time. By default info for all columns will be loaded. Unfortunately some RDBMS are
particularly poor at returning this information so we allow reduced forms to just load the
primary key column info, or not to load any.
Applicable for RDBMS only |
Range of Values |
ALL | PK | NONE |
datanucleus.rdbms.classAdditionMaxRetries
|
Description |
The maximum number of retries when trying to find a class to persist or when validating a
class.
Applicable for RDBMS only |
Range of Values |
3 | A positive integer |
datanucleus.rdbms.constraintCreateMode
|
Description |
How to determine the RDBMS constraints to be created.
DataNucleus will automatically add foreign-keys/indices to handle all relationships, and will
utilise the specified MetaData foreign-key information.
JDO2 will only use the information in the MetaData file(s).
Applicable for RDBMS only
|
Range of Values |
DataNucleus | JDO2 |
datanucleus.rdbms.uniqueConstraints.mapInverse
|
Description |
Whether to add unique constraints to the element table for a map inverse field.
Possible values are true or false.
Applicable for RDBMS only |
Range of values |
true | false |
datanucleus.rdbms.discriminatorPerSubclassTable
|
Description |
Property that controls if only the base class where the discriminator is defined will
have a discriminator column
Applicable for RDBMS only |
Range of values |
false | true |
datanucleus.rdbms.stringDefaultLength
|
Description |
The default (max) length to use for all strings that don't have their column length defined
in MetaData.
Applicable for RDBMS only |
Range of Values |
255 | A valid length |
datanucleus.rdbms.stringLengthExceededAction
|
Description |
Defines what happens when persisting a String field and its length exceeds the length of the
underlying datastore column. The default is to throw an Exception. The other option is to
truncate the String to the length of the datastore column.
Applicable for RDBMS only |
Range of Values |
EXCEPTION | TRUNCATE |
datanucleus.rdbms.useColumnDefaultWhenNull
|
Description |
If an object is being persisted and a field (column) is null, the default behaviour is to look whether the column has a "default" value defined in the datastore
and pass that in. You can turn this off and instead pass in NULL for the column by setting this property to false.
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.persistEmptyStringAsNull
|
Description |
When persisting an empty string, whether it should be persisted as null in the datastore.
This is to allow for datastores (e.g. Oracle) that don't differentiate between null
and an empty string. If it is set to false and the datastore doesn't differentiate then
a special character will be saved when storing an empty string.
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.query.fetchDirection
|
Description |
The direction in which the query results will be navigated.
Applicable for RDBMS only |
Range of Values |
forward | reverse | unknown |
datanucleus.rdbms.query.resultSetType
|
Description |
Type of ResultSet to create. Note 1) Not all JDBC drivers accept all options.
The values correspond directly to the ResultSet options.
Note 2) Not all java.util.List operations are available for scrolling result sets.
An Exception is raised when unsupported operations are invoked.
Applicable for RDBMS only |
Range of Values |
forward-only | scroll-sensitive | scroll-insensitive |
datanucleus.rdbms.query.resultSetConcurrency
|
Description |
Whether the ResultSet is readonly or can be updated. Not all JDBC drivers support all options.
The values correspond directly to the ResultSet options.
Applicable for RDBMS only |
Range of Values |
read-only | updateable |
datanucleus.rdbms.query.multivaluedFetch
|
Description |
How any multi-valued field should be fetched in a query. 'exists' means use an EXISTS statement hence retrieving all elements for the
queried objects in one SQL with EXISTS to select the affected owner objects. 'none' means don't fetch container elements.
Applicable for RDBMS only |
Range of Values |
exists | none |
datanucleus.rdbms.oracleNlsSortOrder
|
Description |
Sort order for Oracle String fields in queries (BINARY disables native language sorting)
Applicable for RDBMS only |
Range of Values |
LATIN | See Oracle documentation |
datanucleus.rdbms.mysql.engineType
|
Description |
Specify the default engine for any tables created in MySQL.
Applicable to MySQL only |
Range of Values |
InnoDB | valid engine for MySQL |
datanucleus.rdbms.mysql.collation
|
Description |
Specify the default collation for any tables created in MySQL.
Applicable to MySQL only |
Range of Values |
valid collation for MySQL |
datanucleus.rdbms.mysql.characterSet
|
Description |
Specify the default charset for any tables created in MySQL.
Applicable to MySQL only |
Range of Values |
valid charset for MySQL |
datanucleus.rdbms.schemaTable.tableName
|
Description |
Name of the table to use when using auto-start mechanism of "SchemaTable"
Please refer to the JDO Auto-Start guide
Applicable for RDBMS only |
Range of Values |
NUCLEUS_TABLES | Valid table name |
datanucleus.rdbms.connectionProviderName
|
Description |
Name of the connection provider to use to allow failover
Please refer to the Failover guide
Applicable for RDBMS only |
Range of Values |
PriorityList | Name of a provider |
datanucleus.rdbms.connectionProviderFailOnError
|
Description |
Whether to fail if an error occurs, or try to continue and log warnings
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.dynamicSchemaUpdates
|
Description |
Whether to allow dynamic updates to the schema. This means that upon each insert/update
the types of objects will be tested and any previously unknown implementations of
interfaces will be added to the existing schema.
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.omitDatabaseMetaDataGetColumns
|
Description |
Whether to bypass all calls to DatabaseMetaData.getColumns(). This JDBC method
is called to get schema information, but on some JDBC drivers (e.g. Derby) it can
take an inordinate amount of time. Setting this to true means that your datastore
schema has to be correct and no checks will be performed.
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.sqlTableNamingStrategy
|
Description |
Name of the plugin to use for defining the names of the aliases of tables
in SQL statements.
Applicable for RDBMS only |
Range of Values |
alpha-scheme | t-scheme |
datanucleus.rdbms.tableColumnOrder
|
Description |
How we should order the columns in a table. The default is to put the fields of
the owning class first, followed by superclasses, then subclasses. An alternative
is to start from the base superclass first, working down to the owner, then
the subclasses
Applicable for RDBMS only |
Range of Values |
owner-first | superclass-first |
datanucleus.rdbms.allowColumnReuse
|
Description |
This property allows you to reuse columns for more than 1 field of a class.
It is false by default to protect the user from erroneously typing in a
column name. Additionally, if a column is reused, the user ought to think about
how to determine which field is written to that column ... all reuse ought to imply
the same value in those fields so it doesn't matter which field is written there, or
retrieved from there.
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.statementLogging
|
Description |
How to log SQL statements. The default is to log the statement and replace any parameters
with the value provided in angle brackets. Alternatively you can log the statement with any
parameters replaced by just the values (no brackets). The final option is to log the
raw JDBC statement (with ? for parameters).
Applicable for RDBMS only |
Range of Values |
values-in-brackets | values | jdbc |
datanucleus.rdbms.fetchUnloadedAutomatically
|
Description |
If enabled will, upon a request to load a field, check for any unloaded fields
that are non-relation fields or 1-1/N-1 fields and will load them in the same
SQL call.
Applicable for RDBMS only |
Range of Values |
true | false |
datanucleus.rdbms.adapter.informixUseSerialForIdentity
|
Description |
Whether we are using SERIAL for identity columns (instead of SERIAL8).
Applicable for RDBMS only. |
Range of Values |
true | false |
datanucleus.cloud.storage.bucket
|
Description |
This is a mandatory property that allows you to supply the bucket name to store your data.
Applicable for Google Storage, and AmazonS3 only. |
Range of Values |
Any valid string |
datanucleus.hbase.enforceUniquenessInApplication
|
Description |
Setting this property to true means that when a new object is persisted (and its identity
is assigned), no check will be made as to whether it already exists in the datastore; the
user takes responsibility for such checks.
Applicable for HBase only. |
Range of Values |
true | false |
datanucleus.cassandra.compression
|
Description |
Type of compression to use for the Cassandra cluster.
Applicable for Cassandra only. |
Range of Values |
none | snappy |
datanucleus.cassandra.metrics
|
Description |
Whether metrics are enabled for the Cassandra cluster.
Applicable for Cassandra only. |
Range of Values |
true | false |
datanucleus.cassandra.ssl
|
Description |
Whether SSL is enabled for the Cassandra cluster.
Applicable for Cassandra only. |
Range of Values |
true | false |
datanucleus.cassandra.socket.readTimeoutMillis
|
Description |
Socket read timeout for the Cassandra cluster.
Applicable for Cassandra only. |
Range of Values |
|
datanucleus.cassandra.socket.connectTimeoutMillis
|
Description |
Socket connect timeout for the Cassandra cluster.
Applicable for Cassandra only. |
Range of Values |
|